Compare commits


20 Commits
2.2 ... 2.3.0

Author SHA1 Message Date
Anatoly Sablin
9b4aff58c7 Add migration documentation. 2020-01-30 23:17:01 +03:00
Anatoly Sablin
a20e41574d Update docs. Add new options and configuration. 2020-01-28 23:20:29 +03:00
Anatoly Sablin
72977d65ae Workaround for postgresql. 2020-01-28 23:18:39 +03:00
Anatoly Sablin
7555fff1a5 Add the postgresql backend for internal storage. 2020-01-28 22:15:26 +03:00
Anatoly Sablin
aed12e5536 Add the --dump-and-exit option to exit after printing the full configuration. 2020-01-28 01:02:43 +03:00
Anatoly Sablin
75efd9921d Improve logging configuration. Introduce the root and the app log levels. 2020-01-28 00:55:39 +03:00
Anatoly Sablin
9219bd4723 Add logging configuration. Add --dump option to just print the full configuration. 2020-01-25 14:57:22 +03:00
Anatoly Sablin
73526be2ac Add configuration to use the legacy query for old synapse to get room names. 2020-01-25 14:04:40 +03:00
ma1uta
b827efca2c Merge pull request #13 from NullIsNot0/fix-room-names-patch
Fix room name retrieval after Synapse dropped table room_names
2020-01-25 10:50:55 +00:00
NullIsNot0
6b7a4c8a23 Fix room name retrieval after Synapse dropped table room_names
Recently Synapse dropped the table "room_names", which it no longer used itself, and this breaks room name retrieval for ma1sd. There is a table "room_stats_state" from which we can retrieve a room's name by its ID. Note that person-to-person conversations do not contain room names, because those are generated on the fly from the other participants' names separated by the word "and". That's why this query will only get names for rooms where the room name was set during creation (or changed later) and is the same for all participants.
Link to Synapse code where it drops "room_names" table: https://github.com/matrix-org/synapse/blob/master/synapse/storage/data_stores/main/schema/delta/56/drop_unused_event_tables.sql#L17
2020-01-10 18:23:29 +02:00
Anatoly Sablin
47f6239268 Add equals and hashCode methods for the MemoryThreePid. 2020-01-09 22:28:44 +03:00
ma1uta
0d6f65b469 Merge pull request #11 from NullIsNot0/master
Load DNS overwrite config on startup and remove duplicates from identity store before email notifications
2020-01-09 19:25:13 +00:00
Edgars Voroboks
be915aed94 Remove duplicates from identity store before email notifications
I use LDAP as the user store. I have set up "mail" and "otherMailbox" as threepid email attributes. When people get invited to rooms, they receive 2 (sometimes 3) invitation e-mails if they have the same e-mail address in the LDAP "mail" and "otherMailbox" fields. I think it's a good idea to check the identity store for duplicates before sending invitation e-mails.
2020-01-09 20:14:56 +02:00
NullIsNot0
ce938bb4a5 Load DNS overwrite config on startup
I recently noticed that DNS overwrite does not happen. There are messages in the logs: "No DNS overwrite for <REDACTED>", but I definitely have configured DNS overwriting. I think it's because the DNS overwrite config is not loaded when ma1sd starts up.
Documented here: https://github.com/ma1uta/ma1sd/blob/master/docs/features/authentication.md#dns-overwrite and here: https://github.com/ma1uta/ma1sd/blob/master/docs/features/directory.md#dns-overwrite
2020-01-07 22:24:26 +02:00
Anatoly Sablin
15db563e8d Add documentation. 2019-12-26 22:49:25 +03:00
Anatoly Sablin
82a538c750 Add an option to enable/disable hash lookup via the LDAP provider. 2019-12-25 22:51:44 +03:00
Anatoly Sablin
84ca8ebbd9 Add support of the MSC2134 (Identity hash lookup) for the LDAP provider. 2019-12-25 00:13:07 +03:00
Anatoly Sablin
774ebf4fa8 Fix for #9. Properly wrap the handlers with the sanitize handler. 2019-12-16 22:47:24 +03:00
Anatoly Sablin
eb1326c56a Add a unique id for the accepted table.
Add a little more logging.
2019-12-10 22:29:00 +03:00
Anatoly Sablin
10cdb4360e Fix homeserver verification with wildcard certificates.
Disable v2 by default.
Add a migration to fix the accepted table (sqlite cannot change a constraint, so the table is dropped and created again).
Fix displaying the expiration period of the new token.
Remove duplicated code.
Use the v1 single lookup when a request arrives with the `none` algorithm and only one address.
Hide the v2 endpoints if the v2 API is disabled.
2019-12-10 00:10:13 +03:00
34 changed files with 709 additions and 212 deletions

View File

@@ -108,7 +108,7 @@ dependencies {
    compile 'net.i2p.crypto:eddsa:0.3.0'
    // LDAP connector
-    compile 'org.apache.directory.api:api-all:1.0.0'
+    compile 'org.apache.directory.api:api-all:1.0.3'
    // DNS lookups
    compile 'dnsjava:dnsjava:2.1.9'

View File

@@ -7,7 +7,7 @@ Default values:
```.yaml
matrix:
  v1: true # deprecated
-  v2: true
+  v2: false
```
To disable, change the value to `false`.
@@ -19,8 +19,14 @@ matrix:
```
NOTE: Riot Web version 1.5.5 and below checks the v1 for backward compatibility.
+NOTE: v2 is disabled by default in order to preserve backward compatibility.
## Terms
+###### Requires: No.
+The administrator can omit the terms configuration; in that case terms checking is disabled.
Example:
```.yaml
policy:
@@ -45,7 +51,7 @@ Where:
- `version` -- the terms version.
- `lang` -- the term language.
- `name` -- the name of the term.
-- `url` -- the url of the term.
+- `url` -- the url of the term. It might be any url (i.e. on another host) pointing to an html page.
- `regexp` -- regexp patterns for API which should be available only after accepting the terms.
The API is checked for accepted terms only when authorization is used.
@@ -72,6 +78,10 @@ There is only one exception: [`POST /_matrix/identity/v2/terms`](https://matrix.
Hashes and the pepper are updated together according to the `rotationPolicy`.
+###### Requires: No.
+With the `none` algorithm, ma1sd performs the lookup using the v1 bulk API.
```.yaml
hashing:
  enabled: true # enable or disable the hash lookup MSC2140 (default is false)
@@ -126,5 +136,11 @@ exec:
  hashEnabled: true # enable the hash lookup (default is false)
```
+For LDAP providers:
+```.yaml
+ldap:
+  lookup: true
+```
NOTE: Federation requests work only with the `none` algorithm.
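A minimal sketch, based on the commented defaults in the example configuration further down this page (not a required setup): keeping `none` alongside `sha256` preserves v1-style and federation lookups while still enabling hashed lookups.
```.yaml
hashing:
  enabled: true
  algorithms:
    - none   # keeps v1 bulk / federation lookups working
    - sha256 # MSC2134 hashed lookup
```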

View File

@@ -58,7 +58,47 @@ Commonly the `server.publicUrl` should be the same value as the `trusted_third_p
## Storage
### SQLite
-`storage.provider.sqlite.database`: Absolute location of the SQLite database
+```yaml
+storage:
+  backend: sqlite # default
+  provider:
+    sqlite:
+      database: /var/lib/ma1sd/store.db # Absolute location of the SQLite database
+```
+### Postgresql
+```yaml
+storage:
+  backend: postgresql
+  provider:
+    postgresql:
+      database: //localhost:5432/ma1sd
+      username: ma1sd
+      password: secret_password
+```
+See [the migration instructions](migration-to-postgresql.md) for moving from sqlite to postgresql.
+## Logging
+```yaml
+logging:
+  root: error # default level for all loggers (apps and thirdparty libraries)
+  app: info # log level only for ma1sd
+```
+Possible values: `trace`, `debug`, `info`, `warn`, `error`, `off`.
+Default value for the root level: `info`.
+The value for the app level can be specified via the `MA1SD_LOG_LEVEL` environment variable, the configuration, or the start options (see the example after the table below).
+Default value for the app level: `info`.
+| start option | equivalent configuration |
+| --- | --- |
+| | app: info |
+| -v | app: debug |
+| -vv | app: trace |
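As an illustrative sketch of the table above (editorial, not part of the committed change): starting ma1sd with `-v` has the same effect as setting the app level in the configuration, and per the note above the same level can presumably also be supplied through `MA1SD_LOG_LEVEL`.
```yaml
logging:
  app: debug # same effect as starting ma1sd with -v
```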
## Identity stores
See the [Identity stores](stores/README.md) for specific configuration

View File

@@ -0,0 +1,41 @@
# Migration from sqlite to postgresql
Starting from version 2.3.0, ma1sd supports postgresql for internal storage in addition to sqlite (parameter `storage.backend`).
#### Migration steps
1. create the postgresql database and user for the ma1sd storage
2. create a backup of the sqlite storage (default location: /var/lib/ma1sd/store.db)
3. migrate the data from sqlite to postgresql
4. change the ma1sd configuration to use postgresql
For the data migration it is possible to use the https://pgloader.io tool.
Example of the migration command:
```shell script
pgloader --with "quote identifiers" /path/to/store.db pgsql://ma1sd_user:ma1sd_password@host:port/database
```
or (short version for a database on localhost)
```shell script
pgloader --with "quote identifiers" /path/to/store.db pgsql://ma1sd_user:ma1sd_password@localhost/ma1sd
```
The option `--with "quote identifiers"` is used to create case-sensitive tables.
- `ma1sd_user` -- the postgresql user for ma1sd
- `ma1sd_password` -- the password of the postgresql user
- `host` -- the postgresql host
- `port` -- the database port (default 5432)
- `database` -- the database name
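Putting the placeholders together with the default sqlite location and the example credentials shown on this page (illustrative values only, adjust to your setup), a full run might look like:
```shell script
pgloader --with "quote identifiers" /var/lib/ma1sd/store.db pgsql://ma1sd_user:ma1sd_password@localhost:5432/ma1sd
```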
Configuration example for postgresql storage:
```yaml
storage:
backend: postgresql
provider:
postgresql:
database: '//localhost/ma1sd' # or full variant //192.168.1.100:5432/ma1sd_database
username: 'ma1sd_user'
password: 'ma1sd_password'
```

View File

@@ -89,7 +89,7 @@ ldap:
#### 3PIDs
You can also change the attribute lists for 3PID, like email or phone numbers.
-The following example would overwrite the [default list of attributes](../../src/main/java/io/kamax/ma1sd/config/ldap/LdapConfig.java#L64)
+The following example would overwrite the [default list of attributes](../../src/main/java/io/kamax/mxisd/config/ldap/LdapConfig.java#L64)
for emails and phone number:
```yaml
ldap:

View File

@@ -22,7 +22,7 @@
matrix:
  domain: ''
  v1: true # deprecated
-  v2: true # MSC2140 API v2
+  v2: false # MSC2140 API v2. Disabled by default in order to preserve backward compatibility.
################
@@ -115,36 +115,59 @@ threepid:
#### MSC2134 (hash lookup)
-hashing:
-  enabled: false # enable or disable the hash lookup MSC2140 (default is false)
-  pepperLength: 20 # length of the pepper value (default is 20)
-  rotationPolicy: per_requests # or `per_seconds` how often the hashes will be updating
-  hashStorageType: sql # or `in_memory` where the hashes will be stored
-  algorithms:
-    - none # the same as v1 bulk lookup
-    - sha256 # hash the 3PID and pepper.
-  delay: 2m # how often hashes will be updated if rotation policy = per_seconds (default is 10s)
-  requests: 10 # how many lookup requests will be performed before updating hashes if rotation policy = per_requests (default is 10)
+#hashing:
+#  enabled: false # enable or disable the hash lookup MSC2140 (default is false)
+#  pepperLength: 20 # length of the pepper value (default is 20)
+#  rotationPolicy: per_requests # or `per_seconds` how often the hashes will be updating
+#  hashStorageType: sql # or `in_memory` where the hashes will be stored
+#  algorithms:
+#    - none # the same as v1 bulk lookup
+#    - sha256 # hash the 3PID and pepper.
+#  delay: 2m # how often hashes will be updated if rotation policy = per_seconds (default is 10s)
+#  requests: 10 # how many lookup requests will be performed before updating hashes if rotation policy = per_requests (default is 10)
### hash lookup for synapseSql provider.
# synapseSql:
#   lookup:
#     query: 'select user_id as mxid, medium, address from user_threepids' # query for retrive 3PIDs for hashes.
+#     legacyRoomNames: false # use the old query to get room names.
+### hash lookup for ldap provider (with example of the ldap configuration)
+# ldap:
+#   enabled: true
+#   lookup: true # hash lookup
+#   connection:
+#     host: 'ldap.domain.tld'
+#     port: 389
+#     bindDn: 'cn=admin,dc=domain,dc=tld'
+#     bindPassword: 'Secret'
+#     baseDNs:
+#       - 'dc=domain,dc=tld'
+#   attribute:
+#     uid:
+#       type: 'uid' # or mxid
+#       value: 'cn'
+#     name: 'displayName'
+#   identity:
+#     filter: '(objectClass=inetOrgPerson)'
#### MSC2140 (Terms)
-policy:
-  policies:
-    term_name: # term name
-      version: 1.0 # version
-      terms:
-        en: # lang
-          name: term name en # localized name
-          url: https://ma1sd.host.tld/term_en.html # localized url
-        fe: # lang
-          name: term name fr # localized name
-          url: https://ma1sd.host.tld/term_fr.html # localized url
-      regexp:
-        - '/_matrix/identity/v2/account.*'
-        - '/_matrix/identity/v2/hash_details'
-        - '/_matrix/identity/v2/lookup'
+#policy:
+#  policies:
+#    term_name: # term name
+#      version: 1.0 # version
+#      terms:
+#        en: # lang
+#          name: term name en # localized name
+#          url: https://ma1sd.host.tld/term_en.html # localized url
+#        fe: # lang
+#          name: term name fr # localized name
+#          url: https://ma1sd.host.tld/term_fr.html # localized url
+#      regexp:
+#        - '/_matrix/identity/v2/account.*'
+#        - '/_matrix/identity/v2/hash_details'
+#        - '/_matrix/identity/v2/lookup'
+#
+# logging:
+#   root: trace # logging level

View File

@@ -104,30 +104,30 @@ public class HttpMxisd {
HttpHandler asNotFoundHandler = SaneHandler.around(new AsNotFoundHandler(m.getAs())); HttpHandler asNotFoundHandler = SaneHandler.around(new AsNotFoundHandler(m.getAs()));
final RoutingHandler handler = Handlers.routing() final RoutingHandler handler = Handlers.routing()
.add("OPTIONS", "/**", SaneHandler.around(new OptionsHandler())) .add("OPTIONS", "/**", sane(new OptionsHandler()))
// Status endpoints // Status endpoints
.get(StatusHandler.Path, SaneHandler.around(new StatusHandler())) .get(StatusHandler.Path, sane(new StatusHandler()))
.get(VersionHandler.Path, SaneHandler.around(new VersionHandler())) .get(VersionHandler.Path, sane(new VersionHandler()))
// Authentication endpoints // Authentication endpoints
.get(LoginHandler.Path, SaneHandler.around(new LoginGetHandler(m.getAuth(), m.getHttpClient()))) .get(LoginHandler.Path, sane(new LoginGetHandler(m.getAuth(), m.getHttpClient())))
.post(LoginHandler.Path, SaneHandler.around(new LoginPostHandler(m.getAuth()))) .post(LoginHandler.Path, sane(new LoginPostHandler(m.getAuth())))
.post(RestAuthHandler.Path, SaneHandler.around(new RestAuthHandler(m.getAuth()))) .post(RestAuthHandler.Path, sane(new RestAuthHandler(m.getAuth())))
// Directory endpoints // Directory endpoints
.post(UserDirectorySearchHandler.Path, SaneHandler.around(new UserDirectorySearchHandler(m.getDirectory()))) .post(UserDirectorySearchHandler.Path, sane(new UserDirectorySearchHandler(m.getDirectory())))
// Profile endpoints // Profile endpoints
.get(ProfileHandler.Path, SaneHandler.around(new ProfileHandler(m.getProfile()))) .get(ProfileHandler.Path, sane(new ProfileHandler(m.getProfile())))
.get(InternalProfileHandler.Path, SaneHandler.around(new InternalProfileHandler(m.getProfile()))) .get(InternalProfileHandler.Path, sane(new InternalProfileHandler(m.getProfile())))
// Registration endpoints // Registration endpoints
.post(Register3pidRequestTokenHandler.Path, .post(Register3pidRequestTokenHandler.Path,
SaneHandler.around(new Register3pidRequestTokenHandler(m.getReg(), m.getClientDns(), m.getHttpClient()))) sane(new Register3pidRequestTokenHandler(m.getReg(), m.getClientDns(), m.getHttpClient())))
// Invite endpoints // Invite endpoints
.post(RoomInviteHandler.Path, SaneHandler.around(new RoomInviteHandler(m.getHttpClient(), m.getClientDns(), m.getInvite()))) .post(RoomInviteHandler.Path, sane(new RoomInviteHandler(m.getHttpClient(), m.getClientDns(), m.getInvite())))
// Application Service endpoints // Application Service endpoints
.get(AsUserHandler.Path, asUserHandler) .get(AsUserHandler.Path, asUserHandler)
@@ -139,7 +139,7 @@ public class HttpMxisd {
.put("/transactions/{" + AsTransactionHandler.ID + "}", asTxnHandler) // Legacy endpoint .put("/transactions/{" + AsTransactionHandler.ID + "}", asTxnHandler) // Legacy endpoint
// Banned endpoints // Banned endpoints
.get(InternalInfoHandler.Path, SaneHandler.around(new InternalInfoHandler())); .get(InternalInfoHandler.Path, sane(new InternalInfoHandler()));
keyEndpoints(handler); keyEndpoints(handler);
identityEndpoints(handler); identityEndpoints(handler);
termsEndpoints(handler); termsEndpoints(handler);
@@ -189,23 +189,32 @@ public class HttpMxisd {
} }
private void accountEndpoints(RoutingHandler routingHandler) { private void accountEndpoints(RoutingHandler routingHandler) {
routingHandler.post(AccountRegisterHandler.Path, SaneHandler.around(new AccountRegisterHandler(m.getAccMgr()))); MatrixConfig matrixConfig = m.getConfig().getMatrix();
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, sane(new AccountGetUserInfoHandler(m.getAccMgr())), if (matrixConfig.isV2()) {
routingHandler.post(AccountRegisterHandler.Path, sane(new AccountRegisterHandler(m.getAccMgr())));
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, new AccountGetUserInfoHandler(m.getAccMgr()),
AccountGetUserInfoHandler.Path, true); AccountGetUserInfoHandler.Path, true);
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, sane(new AccountLogoutHandler(m.getAccMgr())), wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, new AccountLogoutHandler(m.getAccMgr()),
AccountLogoutHandler.Path, true); AccountLogoutHandler.Path, true);
} }
}
private void termsEndpoints(RoutingHandler routingHandler) { private void termsEndpoints(RoutingHandler routingHandler) {
routingHandler.get(GetTermsHandler.PATH, new GetTermsHandler(m.getConfig().getPolicy())); MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV2()) {
routingHandler.get(GetTermsHandler.PATH, sane(new GetTermsHandler(m.getConfig().getPolicy())));
routingHandler.post(AcceptTermsHandler.PATH, sane(new AcceptTermsHandler(m.getAccMgr()))); routingHandler.post(AcceptTermsHandler.PATH, sane(new AcceptTermsHandler(m.getAccMgr())));
} }
}
private void hashEndpoints(RoutingHandler routingHandler) { private void hashEndpoints(RoutingHandler routingHandler) {
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, sane(new HashDetailsHandler(m.getHashManager())), MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV2()) {
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, new HashDetailsHandler(m.getHashManager()),
HashDetailsHandler.PATH, true); HashDetailsHandler.PATH, true);
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.POST, wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.POST,
sane(new HashLookupHandler(m.getIdentity(), m.getHashManager())), HashLookupHandler.Path, true); new HashLookupHandler(m.getIdentity(), m.getHashManager()), HashLookupHandler.Path, true);
}
} }
private void addEndpoints(RoutingHandler routingHandler, HttpString method, boolean useAuthorization, ApiHandler... handlers) { private void addEndpoints(RoutingHandler routingHandler, HttpString method, boolean useAuthorization, ApiHandler... handlers) {
@@ -218,11 +227,11 @@ public class HttpMxisd {
HttpHandler httpHandler) { HttpHandler httpHandler) {
MatrixConfig matrixConfig = m.getConfig().getMatrix(); MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV1()) { if (matrixConfig.isV1()) {
routingHandler.add(method, apiHandler.getPath(IdentityServiceAPI.V1), httpHandler); routingHandler.add(method, apiHandler.getPath(IdentityServiceAPI.V1), sane(httpHandler));
} }
if (matrixConfig.isV2()) { if (matrixConfig.isV2()) {
String path = apiHandler.getPath(IdentityServiceAPI.V2); wrapWithTokenAndAuthorizationHandlers(routingHandler, method, httpHandler, apiHandler.getPath(IdentityServiceAPI.V2),
wrapWithTokenAndAuthorizationHandlers(routingHandler, method, httpHandler, path, useAuthorization); useAuthorization);
} }
} }
@@ -236,7 +245,7 @@ public class HttpMxisd {
} else { } else {
wrappedHandler = httpHandler; wrappedHandler = httpHandler;
} }
routingHandler.add(method, url, wrappedHandler); routingHandler.add(method, url, sane(wrappedHandler));
} }
@NotNull @NotNull

View File

@@ -27,6 +27,8 @@ import io.kamax.mxisd.auth.AuthProviders;
import io.kamax.mxisd.backend.IdentityStoreSupplier; import io.kamax.mxisd.backend.IdentityStoreSupplier;
import io.kamax.mxisd.backend.sql.synapse.Synapse; import io.kamax.mxisd.backend.sql.synapse.Synapse;
import io.kamax.mxisd.config.MxisdConfig; import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.config.PostgresqlStorageConfig;
import io.kamax.mxisd.config.StorageConfig;
import io.kamax.mxisd.crypto.CryptoFactory; import io.kamax.mxisd.crypto.CryptoFactory;
import io.kamax.mxisd.crypto.KeyManager; import io.kamax.mxisd.crypto.KeyManager;
import io.kamax.mxisd.crypto.SignatureManager; import io.kamax.mxisd.crypto.SignatureManager;
@@ -109,7 +111,20 @@ public class Mxisd {
IdentityServerUtils.setHttpClient(httpClient); IdentityServerUtils.setHttpClient(httpClient);
srvFetcher = new RemoteIdentityServerFetcher(httpClient); srvFetcher = new RemoteIdentityServerFetcher(httpClient);
store = new OrmLiteSqlStorage(cfg); StorageConfig.BackendEnum storageBackend = cfg.getStorage().getBackend();
StorageConfig.Provider storageProvider = cfg.getStorage().getProvider();
switch (storageBackend) {
case sqlite:
store = new OrmLiteSqlStorage(storageBackend, storageProvider.getSqlite().getDatabase());
break;
case postgresql:
PostgresqlStorageConfig postgresql = storageProvider.getPostgresql();
store = new OrmLiteSqlStorage(storageBackend, postgresql.getDatabase(), postgresql.getUsername(), postgresql.getPassword());
break;
default:
throw new IllegalStateException("Storage provider hasn't been configured");
}
keyMgr = CryptoFactory.getKeyManager(cfg.getKey()); keyMgr = CryptoFactory.getKeyManager(cfg.getKey());
signMgr = CryptoFactory.getSignatureManager(cfg, keyMgr); signMgr = CryptoFactory.getSignatureManager(cfg, keyMgr);
clientDns = new ClientDnsOverwrite(cfg.getDns().getOverwrite()); clientDns = new ClientDnsOverwrite(cfg.getDns().getOverwrite());

View File

@@ -44,28 +44,43 @@ public class MxisdStandaloneExec {
try { try {
MxisdConfig cfg = null; MxisdConfig cfg = null;
Iterator<String> argsIt = Arrays.asList(args).iterator(); Iterator<String> argsIt = Arrays.asList(args).iterator();
boolean dump = false;
boolean exit = false;
while (argsIt.hasNext()) { while (argsIt.hasNext()) {
String arg = argsIt.next(); String arg = argsIt.next();
if (StringUtils.equalsAny(arg, "-h", "--help", "-?", "--usage")) { switch (arg) {
case "-h":
case "--help":
case "-?":
case "--usage":
System.out.println("Available arguments:" + System.lineSeparator()); System.out.println("Available arguments:" + System.lineSeparator());
System.out.println(" -h, --help Show this help message"); System.out.println(" -h, --help Show this help message");
System.out.println(" --version Print the version then exit"); System.out.println(" --version Print the version then exit");
System.out.println(" -c, --config Set the configuration file location"); System.out.println(" -c, --config Set the configuration file location");
System.out.println(" -v Increase log level (log more info)"); System.out.println(" -v Increase log level (log more info)");
System.out.println(" -vv Further increase log level"); System.out.println(" -vv Further increase log level");
System.out.println(" --dump Dump the full ma1sd configuration");
System.out.println(" --dump-and-exit Dump the full ma1sd configuration and exit");
System.out.println(" "); System.out.println(" ");
System.exit(0); System.exit(0);
} else if (StringUtils.equals(arg, "-v")) { return;
case "-v":
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "debug"); System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "debug");
} else if (StringUtils.equals(arg, "-vv")) { break;
case "-vv":
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "trace"); System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "trace");
} else if (StringUtils.equalsAny(arg, "-c", "--config")) { break;
case "-c":
case "--config":
String cfgFile = argsIt.next(); String cfgFile = argsIt.next();
cfg = YamlConfigLoader.loadFromFile(cfgFile); cfg = YamlConfigLoader.loadFromFile(cfgFile);
} else if (StringUtils.equals("--version", arg)) { break;
System.out.println(Mxisd.Version); case "--dump-and-exit":
System.exit(0); exit = true;
} else { case "--dump":
dump = true;
break;
default:
System.err.println("Invalid argument: " + arg); System.err.println("Invalid argument: " + arg);
System.err.println("Try '--help' for available arguments"); System.err.println("Try '--help' for available arguments");
System.exit(1); System.exit(1);
@@ -76,6 +91,13 @@ public class MxisdStandaloneExec {
cfg = YamlConfigLoader.tryLoadFromFile("ma1sd.yaml").orElseGet(MxisdConfig::new); cfg = YamlConfigLoader.tryLoadFromFile("ma1sd.yaml").orElseGet(MxisdConfig::new);
} }
if (dump) {
YamlConfigLoader.dumpConfig(cfg);
if (exit) {
System.exit(0);
}
}
log.info("ma1sd starting"); log.info("ma1sd starting");
log.info("Version: {}", Mxisd.Version); log.info("Version: {}", Mxisd.Version);

View File

@@ -144,7 +144,13 @@ public class MembershipEventProcessor implements EventTypeProcessor {
                .collect(Collectors.toList());
        log.info("Found {} email(s) in identity store for {}", tpids.size(), inviteeId);
-        for (_ThreePid tpid : tpids) {
+        log.info("Removing duplicates from identity store");
+        List<_ThreePid> uniqueTpids = tpids.stream()
+                .distinct()
+                .collect(Collectors.toList());
+        log.info("There are {} unique email(s) in identity store for {}", uniqueTpids.size(), inviteeId);
+        for (_ThreePid tpid : uniqueTpids) {
            log.info("Found Email to notify about room invitation: {}", tpid.getAddress());
            Map<String, String> properties = new HashMap<>();
            profiler.getDisplayName(sender).ifPresent(name -> properties.put("sender_display_name", name));

View File

@@ -10,6 +10,7 @@ import io.kamax.mxisd.exception.BadRequestException;
import io.kamax.mxisd.exception.InvalidCredentialsException; import io.kamax.mxisd.exception.InvalidCredentialsException;
import io.kamax.mxisd.exception.NotFoundException; import io.kamax.mxisd.exception.NotFoundException;
import io.kamax.mxisd.matrix.HomeserverFederationResolver; import io.kamax.mxisd.matrix.HomeserverFederationResolver;
import io.kamax.mxisd.matrix.HomeserverVerifier;
import io.kamax.mxisd.storage.IStorage; import io.kamax.mxisd.storage.IStorage;
import io.kamax.mxisd.storage.ormlite.dao.AccountDao; import io.kamax.mxisd.storage.ormlite.dao.AccountDao;
import org.apache.http.HttpStatus; import org.apache.http.HttpStatus;
@@ -22,18 +23,10 @@ import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
import java.io.IOException; import java.io.IOException;
import java.security.cert.Certificate;
import java.security.cert.CertificateParsingException;
import java.security.cert.X509Certificate;
import java.time.Instant; import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List; import java.util.List;
import java.util.Objects; import java.util.Objects;
import java.util.UUID; import java.util.UUID;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLPeerUnverifiedException;
import javax.net.ssl.SSLSession;
public class AccountManager { public class AccountManager {
@@ -80,7 +73,7 @@ public class AccountManager {
homeserverURL + "/_matrix/federation/v1/openid/userinfo?access_token=" + openIdToken.getAccessToken()); homeserverURL + "/_matrix/federation/v1/openid/userinfo?access_token=" + openIdToken.getAccessToken());
String userId; String userId;
try (CloseableHttpClient httpClient = HttpClients.custom() try (CloseableHttpClient httpClient = HttpClients.custom()
.setSSLHostnameVerifier(new MatrixHostnameVerifier(homeserverTarget.getDomain())).build()) { .setSSLHostnameVerifier(new HomeserverVerifier(homeserverTarget.getDomain())).build()) {
try (CloseableHttpResponse response = httpClient.execute(getUserInfo)) { try (CloseableHttpResponse response = httpClient.execute(getUserInfo)) {
int statusCode = response.getStatusLine().getStatusCode(); int statusCode = response.getStatusLine().getStatusCode();
if (statusCode == HttpStatus.SC_OK) { if (statusCode == HttpStatus.SC_OK) {
@@ -170,74 +163,4 @@ public class AccountManager {
public MatrixConfig getMatrixConfig() { public MatrixConfig getMatrixConfig() {
return matrixConfig; return matrixConfig;
} }
public static class MatrixHostnameVerifier implements HostnameVerifier {
private static final String ALT_DNS_NAME_TYPE = "2";
private static final String ALT_IP_ADDRESS_TYPE = "7";
private final String matrixHostname;
public MatrixHostnameVerifier(String matrixHostname) {
this.matrixHostname = matrixHostname;
}
@Override
public boolean verify(String hostname, SSLSession session) {
try {
Certificate peerCertificate = session.getPeerCertificates()[0];
if (peerCertificate instanceof X509Certificate) {
X509Certificate x509Certificate = (X509Certificate) peerCertificate;
if (x509Certificate.getSubjectAlternativeNames() == null) {
return false;
}
for (String altSubjectName : getAltSubjectNames(x509Certificate)) {
if (match(altSubjectName)) {
return true;
}
}
}
} catch (SSLPeerUnverifiedException | CertificateParsingException e) {
LOGGER.error("Unable to check remote host", e);
return false;
}
return false;
}
private List<String> getAltSubjectNames(X509Certificate x509Certificate) {
List<String> subjectNames = new ArrayList<>();
try {
for (List<?> subjectAlternativeNames : x509Certificate.getSubjectAlternativeNames()) {
if (subjectAlternativeNames == null
|| subjectAlternativeNames.size() < 2
|| subjectAlternativeNames.get(0) == null
|| subjectAlternativeNames.get(1) == null) {
continue;
}
String subjectType = subjectAlternativeNames.get(0).toString();
switch (subjectType) {
case ALT_DNS_NAME_TYPE:
case ALT_IP_ADDRESS_TYPE:
subjectNames.add(subjectAlternativeNames.get(1).toString());
break;
default:
LOGGER.trace("Unusable subject type: " + subjectType);
}
}
} catch (CertificateParsingException e) {
LOGGER.error("Unable to parse the certificate", e);
return Collections.emptyList();
}
return subjectNames;
}
private boolean match(String altSubjectName) {
if (altSubjectName.startsWith("*.")) {
return altSubjectName.toLowerCase().endsWith(matrixHostname.toLowerCase());
} else {
return matrixHostname.equalsIgnoreCase(altSubjectName);
}
}
}
} }

View File

@@ -41,7 +41,10 @@ import org.slf4j.LoggerFactory;
import java.io.IOException; import java.io.IOException;
import java.util.ArrayList; import java.util.ArrayList;
import java.util.List; import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional; import java.util.Optional;
import java.util.stream.Collectors;
public class LdapThreePidProvider extends LdapBackend implements IThreePidProvider { public class LdapThreePidProvider extends LdapBackend implements IThreePidProvider {
@@ -137,4 +140,65 @@ public class LdapThreePidProvider extends LdapBackend implements IThreePidProvid
return mappingsFound; return mappingsFound;
} }
private List<String> getAttributes() {
final List<String> attributes = getCfg().getAttribute().getThreepid().values().stream().flatMap(List::stream)
.collect(Collectors.toList());
attributes.add(getUidAtt());
return attributes;
}
private Optional<String> getAttributeValue(Entry entry, List<String> attributes) {
return attributes.stream()
.map(attr -> getAttribute(entry, attr))
.filter(Objects::nonNull)
.filter(Optional::isPresent)
.map(Optional::get)
.findFirst();
}
@Override
public Iterable<ThreePidMapping> populateHashes() {
List<ThreePidMapping> result = new ArrayList<>();
if (!getCfg().getIdentity().isLookup()) {
return result;
}
String filter = getCfg().getIdentity().getFilter();
try (LdapConnection conn = getConn()) {
bind(conn);
log.debug("Query: {}", filter);
List<String> attributes = getAttributes();
log.debug("Attributes: {}", GsonUtil.build().toJson(attributes));
for (String baseDN : getBaseDNs()) {
log.debug("Base DN: {}", baseDN);
try (EntryCursor cursor = conn.search(baseDN, filter, SearchScope.SUBTREE, attributes.toArray(new String[0]))) {
while (cursor.next()) {
Entry entry = cursor.get();
log.info("Found possible match, DN: {}", entry.getDn().getName());
Optional<String> mxid = getAttribute(entry, getUidAtt());
if (!mxid.isPresent()) {
continue;
}
for (Map.Entry<String, List<String>> attributeEntry : getCfg().getAttribute().getThreepid().entrySet()) {
String medium = attributeEntry.getKey();
getAttributeValue(entry, attributeEntry.getValue())
.ifPresent(s -> result.add(new ThreePidMapping(medium, s, buildMatrixIdFromUid(mxid.get()))));
}
}
} catch (CursorLdapReferralException e) {
log.warn("3PID is only available via referral, skipping", e);
} catch (IOException | LdapException | CursorException e) {
log.error("Unable to fetch 3PID mappings", e);
}
}
} catch (LdapException | IOException e) {
log.error("Unable to fetch 3PID mappings", e);
}
return result;
}
} }

View File

@@ -107,14 +107,16 @@ public abstract class SqlThreePidProvider implements IThreePidProvider {
@Override @Override
public Iterable<ThreePidMapping> populateHashes() { public Iterable<ThreePidMapping> populateHashes() {
if (StringUtils.isBlank(cfg.getLookup().getQuery())) { String query = cfg.getLookup().getQuery();
if (StringUtils.isBlank(query)) {
log.warn("Lookup query not configured, skip."); log.warn("Lookup query not configured, skip.");
return Collections.emptyList(); return Collections.emptyList();
} }
log.debug("Uses query to match users: {}", query);
List<ThreePidMapping> result = new ArrayList<>(); List<ThreePidMapping> result = new ArrayList<>();
try (Connection connection = pool.get()) { try (Connection connection = pool.get()) {
PreparedStatement statement = connection.prepareStatement(cfg.getLookup().getQuery()); PreparedStatement statement = connection.prepareStatement(query);
try (ResultSet resultSet = statement.executeQuery()) { try (ResultSet resultSet = statement.executeQuery()) {
while (resultSet.next()) { while (resultSet.next()) {
String mxid = resultSet.getString("mxid"); String mxid = resultSet.getString("mxid");

View File

@@ -29,15 +29,19 @@ import java.util.Optional;
public class Synapse { public class Synapse {
private SqlConnectionPool pool; private final SqlConnectionPool pool;
private final SynapseSqlProviderConfig providerConfig;
public Synapse(SynapseSqlProviderConfig sqlCfg) { public Synapse(SynapseSqlProviderConfig sqlCfg) {
this.pool = new SqlConnectionPool(sqlCfg); this.pool = new SqlConnectionPool(sqlCfg);
providerConfig = sqlCfg;
} }
public Optional<String> getRoomName(String id) { public Optional<String> getRoomName(String id) {
String query = providerConfig.isLegacyRoomNames() ? SynapseQueries.getLegacyRoomName() : SynapseQueries.getRoomName();
return pool.withConnFunction(conn -> { return pool.withConnFunction(conn -> {
PreparedStatement stmt = conn.prepareStatement(SynapseQueries.getRoomName()); try (PreparedStatement stmt = conn.prepareStatement(query)) {
stmt.setString(1, id); stmt.setString(1, id);
ResultSet rSet = stmt.executeQuery(); ResultSet rSet = stmt.executeQuery();
if (!rSet.next()) { if (!rSet.next()) {
@@ -45,7 +49,7 @@ public class Synapse {
} }
return Optional.ofNullable(rSet.getString(1)); return Optional.ofNullable(rSet.getString(1));
}
}); });
} }
} }

View File

@@ -72,7 +72,10 @@ public class SynapseQueries {
    }
    public static String getRoomName() {
-        return "select r.name from room_names r, events e, (select r1.room_id,max(e1.origin_server_ts) ts from room_names r1, events e1 where r1.event_id = e1.event_id group by r1.room_id) rle where e.origin_server_ts = rle.ts and r.event_id = e.event_id and r.room_id = ?";
+        return "select name from room_stats_state where room_id = ? limit 1";
    }
+    public static String getLegacyRoomName() {
+        return "select r.name from room_names r, events e, (select r1.room_id,max(e1.origin_server_ts) ts from room_names r1, events e1 where r1.event_id = e1.event_id group by r1.room_id) rle where e.origin_server_ts = rle.ts and r.event_id = e.event_id and r.room_id = ?";
+    }
}

View File

@@ -0,0 +1,47 @@
package io.kamax.mxisd.config;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class LoggingConfig {
private static final Logger LOGGER = LoggerFactory.getLogger("App");
private String root;
private String app;
public String getRoot() {
return root;
}
public void setRoot(String root) {
this.root = root;
}
public String getApp() {
return app;
}
public void setApp(String app) {
this.app = app;
}
public void build() {
LOGGER.info("Logging config:");
if (StringUtils.isNotBlank(getRoot())) {
LOGGER.info(" Default log level: {}", getRoot());
System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", getRoot());
}
String appLevel = System.getProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd");
if (StringUtils.isNotBlank(appLevel)) {
LOGGER.info(" Logging level set by environment: {}", appLevel);
} else if (StringUtils.isNotBlank(getApp())) {
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", getApp());
LOGGER.info(" Logging level set by the configuration: {}", getApp());
} else {
LOGGER.info(" Logging level hasn't set, use default");
}
}
}

View File

@@ -64,7 +64,7 @@ public class MatrixConfig {
    private String domain;
    private Identity identity = new Identity();
    private boolean v1 = true;
-    private boolean v2 = true;
+    private boolean v2 = false;
    public String getDomain() {
        return domain;

View File

@@ -117,6 +117,7 @@ public class MxisdConfig {
private WordpressConfig wordpress = new WordpressConfig(); private WordpressConfig wordpress = new WordpressConfig();
private PolicyConfig policy = new PolicyConfig(); private PolicyConfig policy = new PolicyConfig();
private HashingConfig hashing = new HashingConfig(); private HashingConfig hashing = new HashingConfig();
private LoggingConfig logging = new LoggingConfig();
public AppServiceConfig getAppsvc() { public AppServiceConfig getAppsvc() {
return appsvc; return appsvc;
@@ -342,6 +343,14 @@ public class MxisdConfig {
this.hashing = hashing; this.hashing = hashing;
} }
public LoggingConfig getLogging() {
return logging;
}
public void setLogging(LoggingConfig logging) {
this.logging = logging;
}
public MxisdConfig inMemory() { public MxisdConfig inMemory() {
getKey().setPath(":memory:"); getKey().setPath(":memory:");
getStorage().getProvider().getSqlite().setDatabase(":memory:"); getStorage().getProvider().getSqlite().setDatabase(":memory:");
@@ -350,6 +359,8 @@ public class MxisdConfig {
} }
public MxisdConfig build() { public MxisdConfig build() {
getLogging().build();
if (StringUtils.isBlank(getServer().getName())) { if (StringUtils.isBlank(getServer().getName())) {
getServer().setName(getMatrix().getDomain()); getServer().setName(getMatrix().getDomain());
log.debug("server.name is empty, using matrix.domain"); log.debug("server.name is empty, using matrix.domain");
@@ -359,6 +370,7 @@ public class MxisdConfig {
getAuth().build(); getAuth().build();
getAccountConfig().build(); getAccountConfig().build();
getDirectory().build(); getDirectory().build();
getDns().build();
getExec().build(); getExec().build();
getFirebase().build(); getFirebase().build();
getForward().build(); getForward().build();

View File

@@ -0,0 +1,54 @@
/*
* mxisd - Matrix Identity Server Daemon
* Copyright (C) 2017 Kamax Sarl
*
* https://www.kamax.io/
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package io.kamax.mxisd.config;
public class PostgresqlStorageConfig {
private String database;
private String username;
private String password;
public String getDatabase() {
return database;
}
public void setDatabase(String database) {
this.database = database;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
}

View File

@@ -21,14 +21,21 @@
package io.kamax.mxisd.config; package io.kamax.mxisd.config;
import io.kamax.mxisd.exception.ConfigurationException; import io.kamax.mxisd.exception.ConfigurationException;
import org.apache.commons.lang.StringUtils;
public class StorageConfig { public class StorageConfig {
public enum BackendEnum {
sqlite,
postgresql
}
public static class Provider { public static class Provider {
private SQLiteStorageConfig sqlite = new SQLiteStorageConfig(); private SQLiteStorageConfig sqlite = new SQLiteStorageConfig();
private PostgresqlStorageConfig postgresql = new PostgresqlStorageConfig();
public SQLiteStorageConfig getSqlite() { public SQLiteStorageConfig getSqlite() {
return sqlite; return sqlite;
} }
@@ -37,16 +44,23 @@ public class StorageConfig {
this.sqlite = sqlite; this.sqlite = sqlite;
} }
public PostgresqlStorageConfig getPostgresql() {
return postgresql;
} }
private String backend = "sqlite"; public void setPostgresql(PostgresqlStorageConfig postgresql) {
this.postgresql = postgresql;
}
}
private BackendEnum backend = BackendEnum.sqlite; // or postgresql
private Provider provider = new Provider(); private Provider provider = new Provider();
public String getBackend() { public BackendEnum getBackend() {
return backend; return backend;
} }
public void setBackend(String backend) { public void setBackend(BackendEnum backend) {
this.backend = backend; this.backend = backend;
} }
@@ -59,7 +73,7 @@ public class StorageConfig {
} }
public void build() { public void build() {
if (StringUtils.isBlank(getBackend())) { if (getBackend() == null) {
throw new ConfigurationException("storage.backend"); throw new ConfigurationException("storage.backend");
} }
} }

View File

@@ -76,4 +76,14 @@ public class YamlConfigLoader {
} }
} }
public static void dumpConfig(MxisdConfig cfg) {
Representer rep = new Representer();
rep.getPropertyUtils().setBeanAccess(BeanAccess.FIELD);
rep.getPropertyUtils().setAllowReadOnlyProperties(true);
rep.getPropertyUtils().setSkipMissingProperties(true);
Yaml yaml = new Yaml(new Constructor(MxisdConfig.class), rep);
String dump = yaml.dump(cfg);
log.info("Full configuration:\n{}", dump);
}
} }

View File

@@ -233,6 +233,7 @@ public abstract class LdapConfig {
private String filter; private String filter;
private String token = "%3pid"; private String token = "%3pid";
private Map<String, String> medium = new HashMap<>(); private Map<String, String> medium = new HashMap<>();
private boolean lookup = false;
public String getFilter() { public String getFilter() {
return filter; return filter;
@@ -262,6 +263,13 @@ public abstract class LdapConfig {
this.medium = medium; this.medium = medium;
} }
public boolean isLookup() {
return lookup;
}
public void setLookup(boolean lookup) {
this.lookup = lookup;
}
} }
public static class Profile { public static class Profile {

View File

@@ -45,4 +45,21 @@ public class MemoryThreePid implements _ThreePid {
this.address = address; this.address = address;
} }
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
MemoryThreePid threePid = (MemoryThreePid) o;
if (!medium.equals(threePid.medium)) return false;
return address.equals(threePid.address);
}
@Override
public int hashCode() {
int result = medium.hashCode();
result = 31 * result + address.hashCode();
return result;
}
} }

View File

@@ -24,9 +24,23 @@ import io.kamax.mxisd.UserIdType;
import io.kamax.mxisd.backend.sql.synapse.SynapseQueries; import io.kamax.mxisd.backend.sql.synapse.SynapseQueries;
import io.kamax.mxisd.config.sql.SqlConfig; import io.kamax.mxisd.config.sql.SqlConfig;
import org.apache.commons.lang.StringUtils; import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class SynapseSqlProviderConfig extends SqlConfig { public class SynapseSqlProviderConfig extends SqlConfig {
private transient final Logger log = LoggerFactory.getLogger(SynapseSqlProviderConfig.class);
private boolean legacyRoomNames = false;
public boolean isLegacyRoomNames() {
return legacyRoomNames;
}
public void setLegacyRoomNames(boolean legacyRoomNames) {
this.legacyRoomNames = legacyRoomNames;
}
@Override @Override
protected String getProviderName() { protected String getProviderName() {
return "Synapse SQL"; return "Synapse SQL";
@@ -65,4 +79,12 @@ public class SynapseSqlProviderConfig extends SqlConfig {
printConfig(); printConfig();
} }
@Override
protected void printConfig() {
super.printConfig();
if (isEnabled()) {
log.info("Use legacy room name query: {}", isLegacyRoomNames());
}
}
} }

View File

@@ -35,7 +35,9 @@ public class HashEngine {
hashStorage.clear(); hashStorage.clear();
for (IThreePidProvider provider : providers) { for (IThreePidProvider provider : providers) {
try { try {
LOGGER.info("Populate hashes from the handler: {}", provider.getClass().getCanonicalName());
for (ThreePidMapping pidMapping : provider.populateHashes()) { for (ThreePidMapping pidMapping : provider.populateHashes()) {
LOGGER.debug("Found 3PID: {}", pidMapping);
hashStorage.add(pidMapping, hash(pidMapping)); hashStorage.add(pidMapping, hash(pidMapping));
} }
} catch (Exception e) { } catch (Exception e) {

View File

@@ -48,7 +48,7 @@ public class AccountRegisterHandler extends BasicHttpHandler {
        if (LOGGER.isInfoEnabled()) {
            LOGGER.info("Registration from domain: {}, expired at {}", openIdToken.getMatrixServerName(),
-                new Date(openIdToken.getExpiresIn()));
+                new Date(System.currentTimeMillis() + openIdToken.getExpiresIn()));
        }
        String token = accountManager.register(openIdToken);

View File

@@ -31,6 +31,8 @@ import io.kamax.mxisd.http.undertow.handler.ApiHandler;
import io.kamax.mxisd.http.undertow.handler.identity.share.LookupHandler; import io.kamax.mxisd.http.undertow.handler.identity.share.LookupHandler;
import io.kamax.mxisd.lookup.BulkLookupRequest; import io.kamax.mxisd.lookup.BulkLookupRequest;
import io.kamax.mxisd.lookup.HashLookupRequest; import io.kamax.mxisd.lookup.HashLookupRequest;
import io.kamax.mxisd.lookup.SingleLookupReply;
import io.kamax.mxisd.lookup.SingleLookupRequest;
import io.kamax.mxisd.lookup.ThreePidMapping; import io.kamax.mxisd.lookup.ThreePidMapping;
import io.kamax.mxisd.lookup.strategy.LookupStrategy; import io.kamax.mxisd.lookup.strategy.LookupStrategy;
import io.undertow.server.HttpServerExchange; import io.undertow.server.HttpServerExchange;
@@ -40,6 +42,7 @@ import org.slf4j.LoggerFactory;
import java.util.ArrayList; import java.util.ArrayList;
import java.util.List; import java.util.List;
import java.util.Optional;
public class HashLookupHandler extends LookupHandler implements ApiHandler { public class HashLookupHandler extends LookupHandler implements ApiHandler {
@@ -82,6 +85,7 @@ public class HashLookupHandler extends LookupHandler implements ApiHandler {
default: default:
throw new InvalidParamException(); throw new InvalidParamException();
} }
hashManager.getRotationStrategy().newRequest();
} }
private void noneAlgorithm(HttpServerExchange exchange, HashLookupRequest request, ClientHashLookupRequest input) throws Exception { private void noneAlgorithm(HttpServerExchange exchange, HashLookupRequest request, ClientHashLookupRequest input) throws Exception {
@@ -89,6 +93,18 @@ public class HashLookupHandler extends LookupHandler implements ApiHandler {
throw new InvalidParamException(); throw new InvalidParamException();
} }
ClientHashLookupAnswer answer = null;
if (input.getAddresses() != null && input.getAddresses().size() > 0) {
if (input.getAddresses().size() == 1) {
answer = noneSingleLookup(request, input);
} else {
answer = noneBulkLookup(request, input);
}
}
respondJson(exchange, answer != null ? answer : new ClientHashLookupAnswer());
}
private ClientHashLookupAnswer noneBulkLookup(HashLookupRequest request, ClientHashLookupRequest input) throws Exception {
BulkLookupRequest bulkLookupRequest = new BulkLookupRequest(); BulkLookupRequest bulkLookupRequest = new BulkLookupRequest();
List<ThreePidMapping> mappings = new ArrayList<>(); List<ThreePidMapping> mappings = new ArrayList<>();
for (String address : input.getAddresses()) { for (String address : input.getAddresses()) {
@@ -107,7 +123,26 @@ public class HashLookupHandler extends LookupHandler implements ApiHandler {
} }
log.info("Finished bulk lookup request from {}", request.getRequester()); log.info("Finished bulk lookup request from {}", request.getRequester());
respondJson(exchange, answer); return answer;
}
private ClientHashLookupAnswer noneSingleLookup(HashLookupRequest request, ClientHashLookupRequest input) {
SingleLookupRequest singleLookupRequest = new SingleLookupRequest();
String address = input.getAddresses().get(0);
String[] parts = address.split(" ");
singleLookupRequest.setThreePid(parts[0]);
singleLookupRequest.setType(parts[1]);
ClientHashLookupAnswer answer = new ClientHashLookupAnswer();
Optional<SingleLookupReply> singleLookupReply = strategy.find(singleLookupRequest);
if (singleLookupReply.isPresent()) {
SingleLookupReply reply = singleLookupReply.get();
answer.getMappings().put(address, reply.getMxid().toString());
}
log.info("Finished single lookup request from {}", request.getRequester());
return answer;
} }
private void sha256Algorithm(HttpServerExchange exchange, HashLookupRequest request, ClientHashLookupRequest input) { private void sha256Algorithm(HttpServerExchange exchange, HashLookupRequest request, ClientHashLookupRequest input) {

View File

@@ -46,6 +46,4 @@ public interface LookupStrategy {
    Optional<SingleLookupReply> findRecursive(SingleLookupRequest request);
    CompletableFuture<List<ThreePidMapping>> find(BulkLookupRequest requests);
-    CompletableFuture<List<ThreePidMapping>> find(HashLookupRequest request);
}

View File

@@ -26,17 +26,23 @@ import io.kamax.matrix.json.MatrixJson;
import io.kamax.mxisd.config.MxisdConfig; import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.exception.ConfigurationException; import io.kamax.mxisd.exception.ConfigurationException;
import io.kamax.mxisd.hash.HashManager; import io.kamax.mxisd.hash.HashManager;
import io.kamax.mxisd.hash.storage.HashStorage; import io.kamax.mxisd.lookup.ALookupRequest;
import io.kamax.mxisd.lookup.*; import io.kamax.mxisd.lookup.BulkLookupRequest;
import io.kamax.mxisd.lookup.SingleLookupReply;
import io.kamax.mxisd.lookup.SingleLookupRequest;
import io.kamax.mxisd.lookup.ThreePidMapping;
import io.kamax.mxisd.lookup.fetcher.IBridgeFetcher; import io.kamax.mxisd.lookup.fetcher.IBridgeFetcher;
import io.kamax.mxisd.lookup.provider.IThreePidProvider; import io.kamax.mxisd.lookup.provider.IThreePidProvider;
import org.apache.commons.codec.digest.DigestUtils; import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.lang3.tuple.Pair;
import org.slf4j.Logger; import org.slf4j.Logger;
import org.slf4j.LoggerFactory; import org.slf4j.LoggerFactory;
import java.net.UnknownHostException; import java.net.UnknownHostException;
import java.util.*; import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.concurrent.CompletableFuture; import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors; import java.util.stream.Collectors;
@@ -238,13 +244,4 @@ public class RecursivePriorityLookupStrategy implements LookupStrategy {
result.complete(mapFoundAll); result.complete(mapFoundAll);
return bulkLookupInProgress.remove(payloadId); return bulkLookupInProgress.remove(payloadId);
} }
@Override
public CompletableFuture<List<ThreePidMapping>> find(HashLookupRequest request) {
HashStorage hashStorage = hashManager.getHashStorage();
CompletableFuture<List<ThreePidMapping>> result = new CompletableFuture<>();
result.complete(hashStorage.find(request.getHashes()).stream().map(Pair::getValue).collect(Collectors.toList()));
hashManager.getRotationStrategy().newRequest();
return result;
}
} }

View File

@@ -77,7 +77,8 @@ public class HomeserverVerifier implements HostnameVerifier {
    private boolean match(String altSubjectName) {
        if (altSubjectName.startsWith("*.")) {
-            return altSubjectName.toLowerCase().endsWith(matrixHostname.toLowerCase());
+            String subjectNameWithoutMask = altSubjectName.substring(1); // remove wildcard
+            return matrixHostname.toLowerCase().endsWith(subjectNameWithoutMask.toLowerCase());
        } else {
            return matrixHostname.equalsIgnoreCase(altSubjectName);
        }
    }
View File

@@ -28,8 +28,8 @@ import com.j256.ormlite.stmt.QueryBuilder;
import com.j256.ormlite.support.ConnectionSource; import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.table.TableUtils; import com.j256.ormlite.table.TableUtils;
import io.kamax.matrix.ThreePid; import io.kamax.matrix.ThreePid;
import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.config.PolicyConfig; import io.kamax.mxisd.config.PolicyConfig;
import io.kamax.mxisd.config.StorageConfig;
import io.kamax.mxisd.exception.ConfigurationException; import io.kamax.mxisd.exception.ConfigurationException;
import io.kamax.mxisd.exception.InternalServerError; import io.kamax.mxisd.exception.InternalServerError;
import io.kamax.mxisd.exception.InvalidCredentialsException; import io.kamax.mxisd.exception.InvalidCredentialsException;
@@ -39,6 +39,7 @@ import io.kamax.mxisd.storage.IStorage;
import io.kamax.mxisd.storage.dao.IThreePidSessionDao; import io.kamax.mxisd.storage.dao.IThreePidSessionDao;
import io.kamax.mxisd.storage.ormlite.dao.ASTransactionDao; import io.kamax.mxisd.storage.ormlite.dao.ASTransactionDao;
import io.kamax.mxisd.storage.ormlite.dao.AccountDao; import io.kamax.mxisd.storage.ormlite.dao.AccountDao;
import io.kamax.mxisd.storage.ormlite.dao.ChangelogDao;
import io.kamax.mxisd.storage.ormlite.dao.HashDao; import io.kamax.mxisd.storage.ormlite.dao.HashDao;
import io.kamax.mxisd.storage.ormlite.dao.HistoricalThreePidInviteIO; import io.kamax.mxisd.storage.ormlite.dao.HistoricalThreePidInviteIO;
import io.kamax.mxisd.storage.ormlite.dao.AcceptedDao; import io.kamax.mxisd.storage.ormlite.dao.AcceptedDao;
@@ -46,12 +47,15 @@ import io.kamax.mxisd.storage.ormlite.dao.ThreePidInviteIO;
 import io.kamax.mxisd.storage.ormlite.dao.ThreePidSessionDao;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.lang3.tuple.Pair;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;

 import java.io.IOException;
 import java.sql.SQLException;
 import java.time.Instant;
 import java.util.ArrayList;
 import java.util.Collection;
+import java.util.Date;
 import java.util.List;
 import java.util.Optional;
 import java.util.UUID;
@@ -59,6 +63,8 @@ import java.util.stream.Collectors;
 public class OrmLiteSqlStorage implements IStorage {

+    private static final Logger LOGGER = LoggerFactory.getLogger(OrmLiteSqlStorage.class);

     @FunctionalInterface
     private interface Getter<T> {
@@ -73,42 +79,84 @@ public class OrmLiteSqlStorage implements IStorage {
     }

+    public static class Migrations {
+        public static final String FIX_ACCEPTED_DAO = "2019_12_09__2254__fix_accepted_dao";
+    }
+
     private Dao<ThreePidInviteIO, String> invDao;
     private Dao<HistoricalThreePidInviteIO, String> expInvDao;
     private Dao<ThreePidSessionDao, String> sessionDao;
     private Dao<ASTransactionDao, String> asTxnDao;
     private Dao<AccountDao, String> accountDao;
-    private Dao<AcceptedDao, String> acceptedDao;
+    private Dao<AcceptedDao, Long> acceptedDao;
     private Dao<HashDao, String> hashDao;
+    private Dao<ChangelogDao, String> changelogDao;
+
+    private StorageConfig.BackendEnum backend;

-    public OrmLiteSqlStorage(MxisdConfig cfg) {
-        this(cfg.getStorage().getBackend(), cfg.getStorage().getProvider().getSqlite().getDatabase());
+    public OrmLiteSqlStorage(StorageConfig.BackendEnum backend, String path) {
+        this(backend, path, null, null);
     }

-    public OrmLiteSqlStorage(String backend, String path) {
-        if (StringUtils.isBlank(backend)) {
+    public OrmLiteSqlStorage(StorageConfig.BackendEnum backend, String database, String username, String password) {
+        if (backend == null) {
             throw new ConfigurationException("storage.backend");
         }
+        this.backend = backend;

-        if (StringUtils.isBlank(path)) {
+        if (StringUtils.isBlank(database)) {
             throw new ConfigurationException("Storage destination cannot be empty");
         }

         withCatcher(() -> {
-            ConnectionSource connPool = new JdbcConnectionSource("jdbc:" + backend + ":" + path);
+            ConnectionSource connPool = new JdbcConnectionSource("jdbc:" + backend + ":" + database, username, password);
+            changelogDao = createDaoAndTable(connPool, ChangelogDao.class);
             invDao = createDaoAndTable(connPool, ThreePidInviteIO.class);
             expInvDao = createDaoAndTable(connPool, HistoricalThreePidInviteIO.class);
             sessionDao = createDaoAndTable(connPool, ThreePidSessionDao.class);
             asTxnDao = createDaoAndTable(connPool, ASTransactionDao.class);
             accountDao = createDaoAndTable(connPool, AccountDao.class);
-            acceptedDao = createDaoAndTable(connPool, AcceptedDao.class);
+            acceptedDao = createDaoAndTable(connPool, AcceptedDao.class, true);
             hashDao = createDaoAndTable(connPool, HashDao.class);
+            runMigration(connPool);
         });
     }

+    private void runMigration(ConnectionSource connPool) throws SQLException {
+        ChangelogDao fixAcceptedDao = changelogDao.queryForId(Migrations.FIX_ACCEPTED_DAO);
+        if (fixAcceptedDao == null) {
+            fixAcceptedDao(connPool);
+            changelogDao.create(new ChangelogDao(Migrations.FIX_ACCEPTED_DAO, new Date(), "Recreate the accepted table."));
+        }
+    }
+
+    private void fixAcceptedDao(ConnectionSource connPool) throws SQLException {
+        LOGGER.info("Migration: {}", Migrations.FIX_ACCEPTED_DAO);
+        TableUtils.dropTable(acceptedDao, true);
+        TableUtils.createTableIfNotExists(connPool, AcceptedDao.class);
+    }
+
     private <V, K> Dao<V, K> createDaoAndTable(ConnectionSource connPool, Class<V> c) throws SQLException {
+        return createDaoAndTable(connPool, c, false);
+    }
+
+    /**
+     * Workaround for https://github.com/j256/ormlite-core/issues/20.
+     */
+    private <V, K> Dao<V, K> createDaoAndTable(ConnectionSource connPool, Class<V> c, boolean workaround) throws SQLException {
+        LOGGER.info("Create the dao: {}", c.getSimpleName());
         Dao<V, K> dao = DaoManager.createDao(connPool, c);
+        if (workaround && StorageConfig.BackendEnum.postgresql.equals(backend)) {
+            LOGGER.info("Workaround for postgresql on dao: {}", c.getSimpleName());
+            try {
+                dao.countOf();
+                LOGGER.info("Table exists, do nothing");
+            } catch (SQLException e) {
+                LOGGER.info("Table doesn't exist, create");
                 TableUtils.createTableIfNotExists(connPool, c);
+            }
+        } else {
+            TableUtils.createTableIfNotExists(connPool, c);
+        }
         return dao;
     }
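With the new constructors the caller passes the backend enum and, for PostgreSQL, credentials, instead of the whole MxisdConfig. A hypothetical wiring under assumed values (file path, host, and credentials below are made up for illustration):

    // SQLite: the database argument is a file path (or ":memory:"), yielding jdbc:sqlite:<path>
    OrmLiteSqlStorage sqlite = new OrmLiteSqlStorage(StorageConfig.BackendEnum.sqlite, "/var/lib/ma1sd/ma1sd.db");

    // PostgreSQL: the database argument becomes the tail of the JDBC URL, i.e. jdbc:postgresql://host:port/db
    OrmLiteSqlStorage postgres = new OrmLiteSqlStorage(
            StorageConfig.BackendEnum.postgresql,
            "//localhost:5432/ma1sd",
            "ma1sd",
            "secret");

On the first start after the upgrade, runMigration recreates the accepted table once and records the step in the new changelog table, so the migration is not repeated on later restarts.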

View File

@@ -26,7 +26,10 @@ import com.j256.ormlite.table.DatabaseTable;
 @DatabaseTable(tableName = "accepted")
 public class AcceptedDao {

-    @DatabaseField(canBeNull = false, id = true)
+    @DatabaseField(generatedId = true)
+    private Long id;
+
+    @DatabaseField(canBeNull = false)
     private String url;

     @DatabaseField(canBeNull = false)
@@ -45,6 +48,14 @@ public class AcceptedDao {
         this.acceptedAt = acceptedAt;
     }

+    public Long getId() {
+        return id;
+    }
+
+    public void setId(Long id) {
+        this.id = id;
+    }
+
     public String getUrl() {
         return url;
     }
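The primary key of the accepted table is now a generated Long instead of the url column, so the same policy URL can be stored for more than one acceptance. ORMLite assigns the key on insert and writes it back to the object. A small usage sketch from code that has access to the DAO (the setter name and URL are assumed from the getters shown above, not confirmed by this diff):

    AcceptedDao accepted = new AcceptedDao();
    accepted.setUrl("https://example.org/terms/en/1.0.html");
    // ... set the remaining fields the same way ...
    acceptedDao.create(accepted);          // ORMLite fills in the generated key
    Long generatedId = accepted.getId();   // populated after create()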

View File

@@ -0,0 +1,52 @@
+package io.kamax.mxisd.storage.ormlite.dao;
+
+import com.j256.ormlite.field.DatabaseField;
+import com.j256.ormlite.table.DatabaseTable;
+
+import java.util.Date;
+
+@DatabaseTable(tableName = "changelog")
+public class ChangelogDao {
+
+    @DatabaseField(id = true)
+    private String id;
+
+    @DatabaseField
+    private Date createdAt;
+
+    @DatabaseField
+    private String comment;
+
+    public ChangelogDao() {
+    }
+
+    public ChangelogDao(String id, Date createdAt, String comment) {
+        this.id = id;
+        this.createdAt = createdAt;
+        this.comment = comment;
+    }
+
+    public String getId() {
+        return id;
+    }
+
+    public void setId(String id) {
+        this.id = id;
+    }
+
+    public Date getCreatedAt() {
+        return createdAt;
+    }
+
+    public void setCreatedAt(Date createdAt) {
+        this.createdAt = createdAt;
+    }
+
+    public String getComment() {
+        return comment;
+    }
+
+    public void setComment(String comment) {
+        this.comment = comment;
+    }
+}
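The changelog table is what makes runMigration idempotent: each migration has a fixed id, and a row is written once the step has been applied. A sketch of what checking an applied migration looks like from inside OrmLiteSqlStorage, using only the accessors defined above (the printout is illustrative):

    // After the first start on an upgraded database, the migration is recorded exactly once:
    ChangelogDao applied = changelogDao.queryForId(Migrations.FIX_ACCEPTED_DAO);
    if (applied != null) {
        LOGGER.info("{} applied at {}: {}", applied.getId(), applied.getCreatedAt(), applied.getComment());
    }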

View File

@@ -20,6 +20,7 @@
 package io.kamax.mxisd.test.storage;

+import io.kamax.mxisd.config.StorageConfig;
 import io.kamax.mxisd.storage.ormlite.OrmLiteSqlStorage;
 import org.junit.Test;
@@ -29,14 +30,14 @@ public class OrmLiteSqlStorageTest {
     @Test
     public void insertAsTxnDuplicate() {
-        OrmLiteSqlStorage store = new OrmLiteSqlStorage("sqlite", ":memory:");
+        OrmLiteSqlStorage store = new OrmLiteSqlStorage(StorageConfig.BackendEnum.sqlite, ":memory:");
         store.insertTransactionResult("mxisd", "1", Instant.now(), "{}");
         store.insertTransactionResult("mxisd", "2", Instant.now(), "{}");
     }

     @Test(expected = RuntimeException.class)
     public void insertAsTxnSame() {
-        OrmLiteSqlStorage store = new OrmLiteSqlStorage("sqlite", ":memory:");
+        OrmLiteSqlStorage store = new OrmLiteSqlStorage(StorageConfig.BackendEnum.sqlite, ":memory:");
         store.insertTransactionResult("mxisd", "1", Instant.now(), "{}");
         store.insertTransactionResult("mxisd", "1", Instant.now(), "{}");
     }