Compare commits

...

31 Commits

Author SHA1 Message Date
Anatoliy Sablin
9c4faab5d8 Add option to log all requests and responses. 2020-05-06 23:46:34 +03:00
Anatoliy Sablin
53c4ffdc4e Add pooling database connection for postgresql. 2020-05-06 20:55:14 +03:00
Anatoliy Sablin
e4144e923a Add error logs. 2020-05-06 19:47:13 +03:00
Anatoliy Sablin
791361c10d Add the migration to fix column types in the postgresql. 2020-05-06 19:39:33 +03:00
Anatoliy Sablin
4b5eecd7e7 Enable v2 by default because Riot require v2 api. 2020-04-21 23:27:20 +03:00
Anatoliy Sablin
a6968fb7e9 Fix #27. 2020-04-07 22:46:14 +03:00
Anatoliy Sablin
d4853b1154 Add config for hostname. 2020-04-07 22:46:14 +03:00
ma1uta
89df4b2425 Merge pull request #33 from aaronraimist/patch-1
ma1sd implements r0.3.0 of the identity server API
2020-04-05 10:20:42 +00:00
Aaron Raimist
0f89121b98 ma1sd implements r0.3.0 of the identity server API 2020-04-04 17:16:25 -05:00
Anatoliy Sablin
8a40ca185b Fix #22. 2020-03-22 12:17:33 +03:00
Anatoliy Sablin
5baeb42623 Fix #29. 2020-03-22 12:12:47 +03:00
Anatoliy Sablin
072e5f66cb #26 Use empty pepper. 2020-02-19 23:35:59 +03:00
Anatoliy Sablin
b2f41d689b #26 fix. 2020-02-19 00:36:05 +03:00
Anatoly Sablin
9b4aff58c7 Add migration documentation. 2020-01-30 23:17:01 +03:00
Anatoly Sablin
a20e41574d Update docs. Add a new options and configuration. 2020-01-28 23:20:29 +03:00
Anatoly Sablin
72977d65ae Workaround for postgresql. 2020-01-28 23:18:39 +03:00
Anatoly Sablin
7555fff1a5 Add the postgresql backend for internal storage. 2020-01-28 22:15:26 +03:00
Anatoly Sablin
aed12e5536 Add the --dump-and-exit option to exit after printing the full configuration. 2020-01-28 01:02:43 +03:00
Anatoly Sablin
75efd9921d Improve logging configuration. Introduce the root and the app log levels. 2020-01-28 00:55:39 +03:00
Anatoly Sablin
9219bd4723 Add logging configuration. Add --dump option to just print the full configuration. 2020-01-25 14:57:22 +03:00
Anatoly Sablin
73526be2ac Add configuration to use the legacy query for old synapse to get room names. 2020-01-25 14:04:40 +03:00
ma1uta
b827efca2c Merge pull request #13 from NullIsNot0/fix-room-names-patch
Fix room name retrieval after Synapse dropped table room_names
2020-01-25 10:50:55 +00:00
NullIsNot0
6b7a4c8a23 Fix room name retrieval after Synapse dropped table room_names
Recently Synapse dropped the table "room_names" (unused by Synapse itself), which breaks room name retrieval for ma1sd. There is a table "room_stats_state" from which we can retrieve a room name by its ID. Note that person-to-person conversations do not contain room names, because those are generated on the fly from the other participants' names joined by the word "and". That's why this query will only get names for rooms where the room name was set during creation (or changed later) and is the same for all participants.
Link to Synapse code where it drops "room_names" table: https://github.com/matrix-org/synapse/blob/master/synapse/storage/data_stores/main/schema/delta/56/drop_unused_event_tables.sql#L17
2020-01-10 18:23:29 +02:00
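As the diff to `SynapseQueries` later in this comparison shows, the fix keeps the old join as a legacy fallback and switches the default lookup to the new table. The replacement query is simply:

```sql
-- New default room-name lookup against Synapse's room_stats_state table
-- (the legacy query joined the now-dropped room_names table)
select name from room_stats_state where room_id = ? limit 1;
```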
Anatoly Sablin
47f6239268 Add equals and hashCode methods for the MemoryThreePid. 2020-01-09 22:28:44 +03:00
ma1uta
0d6f65b469 Merge pull request #11 from NullIsNot0/master
Load DNS overwrite config on startup and remove duplicates from identity store before email notifications
2020-01-09 19:25:13 +00:00
Edgars Voroboks
be915aed94 Remove duplicates from identity store before email notifications
I use LDAP as the user store. I have set up "mail" and "otherMailbox" as threepid email attributes. When people get invited to rooms, they receive 2 (sometimes 3) invitation e-mails if they have the same e-mail address in the LDAP "mail" and "otherMailbox" fields. I think it's a good idea to check the identity store for duplicates before sending invitation e-mails.
2020-01-09 20:14:56 +02:00
NullIsNot0
ce938bb4a5 Load DNS overwrite config on startup
I recently noticed that the DNS overwrite does not happen. There are messages in the logs: "No DNS overwrite for <REDACTED>", but I definitely have configured DNS overwriting. I think it's because the DNS overwrite config is not loaded when ma1sd starts up.
Documented here: https://github.com/ma1uta/ma1sd/blob/master/docs/features/authentication.md#dns-overwrite and here: https://github.com/ma1uta/ma1sd/blob/master/docs/features/directory.md#dns-overwrite
2020-01-07 22:24:26 +02:00
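For reference, the overwrite block those pages describe looks roughly like the sketch below (the domain and URL are illustrative, not taken from this changeset):

```yaml
# Hypothetical example; see docs/features/authentication.md for the authoritative format
dns:
  overwrite:
    homeserver:
      client:
        - name: 'example.org'            # Matrix domain being overwritten
          value: 'http://localhost:8008' # address the homeserver actually listens on
```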
Anatoly Sablin
15db563e8d Add documentation. 2019-12-26 22:49:25 +03:00
Anatoly Sablin
82a538c750 Add an option to enable/disable hash lookup via the LDAP provider. 2019-12-25 22:51:44 +03:00
Anatoly Sablin
84ca8ebbd9 Add support of the MSC2134 (Identity hash lookup) for the LDAP provider. 2019-12-25 00:13:07 +03:00
Anatoly Sablin
774ebf4fa8 Fix for #9. Proper wrap the handles with the sanitize handler. 2019-12-16 22:47:24 +03:00
50 changed files with 1238 additions and 153 deletions

View File

@@ -108,7 +108,7 @@ dependencies {
compile 'net.i2p.crypto:eddsa:0.3.0'
// LDAP connector
compile 'org.apache.directory.api:api-all:1.0.0'
compile 'org.apache.directory.api:api-all:1.0.3'
// DNS lookups
compile 'dnsjava:dnsjava:2.1.9'

View File

@@ -110,7 +110,7 @@ For sql provider (i.e. for the `synapseSql`):
```.yaml
synapseSql:
lookup:
query: 'select user_id as mxid, medium, address from user_threepids' # query to retrieve 3PIDs for hashes.
query: 'select user_id as mxid, medium, address from user_threepid_id_server' # query to retrieve 3PIDs for hashes.
```
For general sql provider:
@@ -136,5 +136,11 @@ exec:
hashEnabled: true # enable the hash lookup (default is false)
```
For ldap providers:
```.yaml
ldap:
lookup: true
```
NOTE: Federation requests work only with the `none` algorithm.

View File

@@ -58,7 +58,53 @@ Commonly the `server.publicUrl` should be the same value as the `trusted_third_p
## Storage
### SQLite
`storage.provider.sqlite.database`: Absolute location of the SQLite database
```yaml
storage:
backend: sqlite # default
provider:
sqlite:
database: /var/lib/ma1sd/store.db # Absolute location of the SQLite database
```
### Postgresql
```yaml
storage:
backend: postgresql
provider:
postgresql:
database: //localhost:5432/ma1sd
username: ma1sd
password: secret_password
```
See the [migration instructions](migration-to-postgresql.md) for moving from SQLite to PostgreSQL.
## Logging
```yaml
logging:
root: error # default level for all loggers (apps and thirdparty libraries)
app: info # log level only for the ma1sd
requests: false # log request and response
```
Possible values: `trace`, `debug`, `info`, `warn`, `error`, `off`.
Default value for the root level: `info`.
The app level can be set via the `MA1SD_LOG_LEVEL` environment variable, the configuration, or a start option.
Default value for the app level: `info`.
| start option | equivalent configuration |
| --- | --- |
| (none) | app: info |
| -v | app: debug |
| -vv | app: trace |
#### WARNING
The setting `logging.requests` *MUST NOT* be used in production because it prints full, unmasked requests and responses into the log and can cause a data leak.
Use this setting only for testing and debugging.
## Identity stores
See the [Identity stores](stores/README.md) for specific configuration

View File

@@ -1,5 +1,5 @@
# Identity
Implementation of the [Identity Service API r0.2.0](https://matrix.org/docs/spec/identity_service/r0.2.0.html).
Implementation of the [Identity Service API r0.3.0](https://matrix.org/docs/spec/identity_service/r0.3.0.html).
- [Lookups](#lookups)
- [Invitations](#invitations)

View File

@@ -0,0 +1,41 @@
# Migration from sqlite to postgresql
Starting with version 2.3.0, ma1sd supports PostgreSQL for the internal storage in addition to SQLite (parameter `storage.backend`).
#### Migration steps
1. Create the PostgreSQL database and user for the ma1sd storage
2. Create a backup of the SQLite storage (default location: /var/lib/ma1sd/store.db)
3. Migrate the data from SQLite to PostgreSQL
4. Change the ma1sd configuration to use PostgreSQL
For the data migration it is possible to use the https://pgloader.io tool.
An example of the migration command:
```shell script
pgloader --with "quote identifiers" /path/to/store.db pgsql://ma1sd_user:ma1sd_password@host:port/database
```
or (short version for a database on localhost):
```shell script
pgloader --with "quote identifiers" /path/to/store.db pgsql://ma1sd_user:ma1sd_password@localhost/ma1sd
```
The option `--with "quote identifiers"` is used to create case-sensitive tables.
- `ma1sd_user` - the PostgreSQL user for ma1sd
- `ma1sd_password` - the password of the PostgreSQL user
- `host` - the PostgreSQL host
- `port` - the database port (default: 5432)
- `database` - the database name
Configuration example for postgresql storage:
```yaml
storage:
backend: postgresql
provider:
postgresql:
database: '//localhost/ma1sd' # or full variant //192.168.1.100:5432/ma1sd_database
username: 'ma1sd_user'
password: 'ma1sd_password'
```

View File

@@ -89,7 +89,7 @@ ldap:
#### 3PIDs
You can also change the attribute lists for 3PID, like email or phone numbers.
The following example would overwrite the [default list of attributes](../../src/main/java/io/kamax/ma1sd/config/ldap/LdapConfig.java#L64)
The following example would overwrite the [default list of attributes](../../src/main/java/io/kamax/mxisd/config/ldap/LdapConfig.java#L64)
for emails and phone number:
```yaml
ldap:

View File

@@ -136,7 +136,7 @@ sql:
```
For the `role` query, `type` can be used to tell ma1sd how to inject the User ID in the query:
- `localpart` will extract and set only the localpart.
- `uid` will extract and set only the localpart.
- `mxid` will use the ID as-is.
On each query, the first parameter `?` is set as a string with the corresponding ID format.
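To illustrate the difference between the two ID formats, a sketch of a `role` query pair (the table and column names are hypothetical, not part of ma1sd):

```sql
-- type: uid  -> the parameter is bound as the localpart, e.g. 'john'
select role from user_roles where username = ?;

-- type: mxid -> the parameter is bound as the full Matrix ID, e.g. '@john:example.org'
select role from user_roles where user_id = ?;
```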

View File

@@ -22,7 +22,7 @@
matrix:
domain: ''
v1: true # deprecated
v2: false # MSC2140 API v2. Disabled by default in order to preserve backward compatibility.
v2: true # MSC2140 API v2. Riot requires the v2 API to be enabled.
################
@@ -51,10 +51,39 @@ key:
# - /var/lib/ma1sd/store.db
#
storage:
# backend: sqlite # or postgresql
provider:
sqlite:
database: '/path/to/ma1sd.db'
# postgresql:
# # Wrap all string values with quotes to avoid yaml parsing mistakes
# database: '//localhost/ma1sd' # or full variant //192.168.1.100:5432/ma1sd_database
# username: 'ma1sd_user'
# password: 'ma1sd_password'
#
# # Pool configuration for postgresql backend.
# #######
# # Enable or disable pooling
# pool: false
#
# #######
# # Check the database connection before getting it from the pool
# testBeforeGetFromPool: false # or true
#
# #######
# # There is an internal thread which checks each of the database connections as a keep-alive mechanism. This sets the
# # number of milliseconds it sleeps between checks -- default is 30000. To disable the checking thread, set this to
# # 0 before you start using the connection source.
# checkConnectionsEveryMillis: 30000
#
# #######
# # Set the number of connections that can be unused in the available list.
# maxConnectionsFree: 5
#
# #######
# # Set the number of milliseconds that a connection can stay open before being closed. Set to 9223372036854775807 to have
# # the connections never expire.
# maxConnectionAgeMillis: 3600000
###################
# Identity Stores #
@@ -129,7 +158,27 @@ threepid:
### hash lookup for synapseSql provider.
# synapseSql:
# lookup:
# query: 'select user_id as mxid, medium, address from user_threepids' # query to retrieve 3PIDs for hashes.
# query: 'select user_id as mxid, medium, address from user_threepid_id_server' # query to retrieve 3PIDs for hashes.
# legacyRoomNames: false # use the old query to get room names.
### hash lookup for ldap provider (with example of the ldap configuration)
# ldap:
# enabled: true
# lookup: true # hash lookup
# connection:
# host: 'ldap.domain.tld'
# port: 389
# bindDn: 'cn=admin,dc=domain,dc=tld'
# bindPassword: 'Secret'
# baseDNs:
# - 'dc=domain,dc=tld'
# attribute:
# uid:
# type: 'uid' # or mxid
# value: 'cn'
# name: 'displayName'
# identity:
# filter: '(objectClass=inetOrgPerson)'
#### MSC2140 (Terms)
#policy:
@@ -148,3 +197,8 @@ threepid:
# - '/_matrix/identity/v2/hash_details'
# - '/_matrix/identity/v2/lookup'
#
# logging:
# root: error # default level for all loggers (apps and thirdparty libraries)
# app: info # log level only for the ma1sd
# requests: false # or true to dump full requests and responses

View File

@@ -23,11 +23,13 @@ package io.kamax.mxisd;
import io.kamax.mxisd.config.MatrixConfig;
import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.config.PolicyConfig;
import io.kamax.mxisd.config.ServerConfig;
import io.kamax.mxisd.http.undertow.handler.ApiHandler;
import io.kamax.mxisd.http.undertow.handler.AuthorizationHandler;
import io.kamax.mxisd.http.undertow.handler.CheckTermsHandler;
import io.kamax.mxisd.http.undertow.handler.InternalInfoHandler;
import io.kamax.mxisd.http.undertow.handler.OptionsHandler;
import io.kamax.mxisd.http.undertow.handler.RequestDumpingHandler;
import io.kamax.mxisd.http.undertow.handler.SaneHandler;
import io.kamax.mxisd.http.undertow.handler.as.v1.AsNotFoundHandler;
import io.kamax.mxisd.http.undertow.handler.as.v1.AsTransactionHandler;
@@ -99,35 +101,35 @@ public class HttpMxisd {
public void start() {
m.start();
HttpHandler asUserHandler = SaneHandler.around(new AsUserHandler(m.getAs()));
HttpHandler asTxnHandler = SaneHandler.around(new AsTransactionHandler(m.getAs()));
HttpHandler asNotFoundHandler = SaneHandler.around(new AsNotFoundHandler(m.getAs()));
HttpHandler asUserHandler = sane(new AsUserHandler(m.getAs()));
HttpHandler asTxnHandler = sane(new AsTransactionHandler(m.getAs()));
HttpHandler asNotFoundHandler = sane(new AsNotFoundHandler(m.getAs()));
final RoutingHandler handler = Handlers.routing()
.add("OPTIONS", "/**", SaneHandler.around(new OptionsHandler()))
.add("OPTIONS", "/**", sane(new OptionsHandler()))
// Status endpoints
.get(StatusHandler.Path, SaneHandler.around(new StatusHandler()))
.get(VersionHandler.Path, SaneHandler.around(new VersionHandler()))
.get(StatusHandler.Path, sane(new StatusHandler()))
.get(VersionHandler.Path, sane(new VersionHandler()))
// Authentication endpoints
.get(LoginHandler.Path, SaneHandler.around(new LoginGetHandler(m.getAuth(), m.getHttpClient())))
.post(LoginHandler.Path, SaneHandler.around(new LoginPostHandler(m.getAuth())))
.post(RestAuthHandler.Path, SaneHandler.around(new RestAuthHandler(m.getAuth())))
.get(LoginHandler.Path, sane(new LoginGetHandler(m.getAuth(), m.getHttpClient())))
.post(LoginHandler.Path, sane(new LoginPostHandler(m.getAuth())))
.post(RestAuthHandler.Path, sane(new RestAuthHandler(m.getAuth())))
// Directory endpoints
.post(UserDirectorySearchHandler.Path, SaneHandler.around(new UserDirectorySearchHandler(m.getDirectory())))
.post(UserDirectorySearchHandler.Path, sane(new UserDirectorySearchHandler(m.getDirectory())))
// Profile endpoints
.get(ProfileHandler.Path, SaneHandler.around(new ProfileHandler(m.getProfile())))
.get(InternalProfileHandler.Path, SaneHandler.around(new InternalProfileHandler(m.getProfile())))
.get(ProfileHandler.Path, sane(new ProfileHandler(m.getProfile())))
.get(InternalProfileHandler.Path, sane(new InternalProfileHandler(m.getProfile())))
// Registration endpoints
.post(Register3pidRequestTokenHandler.Path,
SaneHandler.around(new Register3pidRequestTokenHandler(m.getReg(), m.getClientDns(), m.getHttpClient())))
sane(new Register3pidRequestTokenHandler(m.getReg(), m.getClientDns(), m.getHttpClient())))
// Invite endpoints
.post(RoomInviteHandler.Path, SaneHandler.around(new RoomInviteHandler(m.getHttpClient(), m.getClientDns(), m.getInvite())))
.post(RoomInviteHandler.Path, sane(new RoomInviteHandler(m.getHttpClient(), m.getClientDns(), m.getInvite())))
// Application Service endpoints
.get(AsUserHandler.Path, asUserHandler)
@@ -139,13 +141,14 @@ public class HttpMxisd {
.put("/transactions/{" + AsTransactionHandler.ID + "}", asTxnHandler) // Legacy endpoint
// Banned endpoints
.get(InternalInfoHandler.Path, SaneHandler.around(new InternalInfoHandler()));
.get(InternalInfoHandler.Path, sane(new InternalInfoHandler()));
keyEndpoints(handler);
identityEndpoints(handler);
termsEndpoints(handler);
hashEndpoints(handler);
accountEndpoints(handler);
httpSrv = Undertow.builder().addHttpListener(m.getConfig().getServer().getPort(), "0.0.0.0").setHandler(handler).build();
ServerConfig serverConfig = m.getConfig().getServer();
httpSrv = Undertow.builder().addHttpListener(serverConfig.getPort(), serverConfig.getHostname()).setHandler(handler).build();
httpSrv.start();
}
@@ -191,10 +194,10 @@ public class HttpMxisd {
private void accountEndpoints(RoutingHandler routingHandler) {
MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV2()) {
routingHandler.post(AccountRegisterHandler.Path, SaneHandler.around(new AccountRegisterHandler(m.getAccMgr())));
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, sane(new AccountGetUserInfoHandler(m.getAccMgr())),
routingHandler.post(AccountRegisterHandler.Path, sane(new AccountRegisterHandler(m.getAccMgr())));
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, new AccountGetUserInfoHandler(m.getAccMgr()),
AccountGetUserInfoHandler.Path, true);
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, sane(new AccountLogoutHandler(m.getAccMgr())),
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, new AccountLogoutHandler(m.getAccMgr()),
AccountLogoutHandler.Path, true);
}
}
@@ -202,7 +205,7 @@ public class HttpMxisd {
private void termsEndpoints(RoutingHandler routingHandler) {
MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV2()) {
routingHandler.get(GetTermsHandler.PATH, new GetTermsHandler(m.getConfig().getPolicy()));
routingHandler.get(GetTermsHandler.PATH, sane(new GetTermsHandler(m.getConfig().getPolicy())));
routingHandler.post(AcceptTermsHandler.PATH, sane(new AcceptTermsHandler(m.getAccMgr())));
}
}
@@ -210,10 +213,10 @@ public class HttpMxisd {
private void hashEndpoints(RoutingHandler routingHandler) {
MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV2()) {
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, sane(new HashDetailsHandler(m.getHashManager())),
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, new HashDetailsHandler(m.getHashManager()),
HashDetailsHandler.PATH, true);
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.POST,
sane(new HashLookupHandler(m.getIdentity(), m.getHashManager())), HashLookupHandler.Path, true);
new HashLookupHandler(m.getIdentity(), m.getHashManager()), HashLookupHandler.Path, true);
}
}
@@ -227,11 +230,11 @@ public class HttpMxisd {
HttpHandler httpHandler) {
MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV1()) {
routingHandler.add(method, apiHandler.getPath(IdentityServiceAPI.V1), httpHandler);
routingHandler.add(method, apiHandler.getPath(IdentityServiceAPI.V1), sane(httpHandler));
}
if (matrixConfig.isV2()) {
String path = apiHandler.getPath(IdentityServiceAPI.V2);
wrapWithTokenAndAuthorizationHandlers(routingHandler, method, httpHandler, path, useAuthorization);
wrapWithTokenAndAuthorizationHandlers(routingHandler, method, httpHandler, apiHandler.getPath(IdentityServiceAPI.V2),
useAuthorization);
}
}
@@ -245,7 +248,7 @@ public class HttpMxisd {
} else {
wrappedHandler = httpHandler;
}
routingHandler.add(method, url, wrappedHandler);
routingHandler.add(method, url, sane(wrappedHandler));
}
@NotNull
@@ -265,6 +268,11 @@ public class HttpMxisd {
}
private HttpHandler sane(HttpHandler httpHandler) {
return SaneHandler.around(httpHandler);
SaneHandler handler = SaneHandler.around(httpHandler);
if (m.getConfig().getLogging().isRequests()) {
return new RequestDumpingHandler(handler);
} else {
return handler;
}
}
}

View File

@@ -27,6 +27,7 @@ import io.kamax.mxisd.auth.AuthProviders;
import io.kamax.mxisd.backend.IdentityStoreSupplier;
import io.kamax.mxisd.backend.sql.synapse.Synapse;
import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.config.StorageConfig;
import io.kamax.mxisd.crypto.CryptoFactory;
import io.kamax.mxisd.crypto.KeyManager;
import io.kamax.mxisd.crypto.SignatureManager;
@@ -66,7 +67,7 @@ public class Mxisd {
public static final String Version = StringUtils.defaultIfBlank(Mxisd.class.getPackage().getImplementationVersion(), "UNKNOWN");
public static final String Agent = Name + "/" + Version;
private MxisdConfig cfg;
private final MxisdConfig cfg;
private CloseableHttpClient httpClient;
private IRemoteIdentityServerFetcher srvFetcher;
@@ -109,7 +110,10 @@ public class Mxisd {
IdentityServerUtils.setHttpClient(httpClient);
srvFetcher = new RemoteIdentityServerFetcher(httpClient);
store = new OrmLiteSqlStorage(cfg);
StorageConfig.BackendEnum storageBackend = cfg.getStorage().getBackend();
StorageConfig.Provider storageProvider = cfg.getStorage().getProvider();
store = new OrmLiteSqlStorage(storageBackend, storageProvider);
keyMgr = CryptoFactory.getKeyManager(cfg.getKey());
signMgr = CryptoFactory.getSignatureManager(cfg, keyMgr);
clientDns = new ClientDnsOverwrite(cfg.getDns().getOverwrite());

View File

@@ -44,31 +44,46 @@ public class MxisdStandaloneExec {
try {
MxisdConfig cfg = null;
Iterator<String> argsIt = Arrays.asList(args).iterator();
boolean dump = false;
boolean exit = false;
while (argsIt.hasNext()) {
String arg = argsIt.next();
if (StringUtils.equalsAny(arg, "-h", "--help", "-?", "--usage")) {
System.out.println("Available arguments:" + System.lineSeparator());
System.out.println(" -h, --help Show this help message");
System.out.println(" --version Print the version then exit");
System.out.println(" -c, --config Set the configuration file location");
System.out.println(" -v Increase log level (log more info)");
System.out.println(" -vv Further increase log level");
System.out.println(" ");
System.exit(0);
} else if (StringUtils.equals(arg, "-v")) {
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "debug");
} else if (StringUtils.equals(arg, "-vv")) {
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "trace");
} else if (StringUtils.equalsAny(arg, "-c", "--config")) {
String cfgFile = argsIt.next();
cfg = YamlConfigLoader.loadFromFile(cfgFile);
} else if (StringUtils.equals("--version", arg)) {
System.out.println(Mxisd.Version);
System.exit(0);
} else {
System.err.println("Invalid argument: " + arg);
System.err.println("Try '--help' for available arguments");
System.exit(1);
switch (arg) {
case "-h":
case "--help":
case "-?":
case "--usage":
System.out.println("Available arguments:" + System.lineSeparator());
System.out.println(" -h, --help Show this help message");
System.out.println(" --version Print the version then exit");
System.out.println(" -c, --config Set the configuration file location");
System.out.println(" -v Increase log level (log more info)");
System.out.println(" -vv Further increase log level");
System.out.println(" --dump Dump the full ma1sd configuration");
System.out.println(" --dump-and-exit Dump the full ma1sd configuration and exit");
System.out.println(" ");
System.exit(0);
return;
case "-v":
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "debug");
break;
case "-vv":
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "trace");
break;
case "-c":
case "--config":
String cfgFile = argsIt.next();
cfg = YamlConfigLoader.loadFromFile(cfgFile);
break;
case "--dump-and-exit":
exit = true;
case "--dump":
dump = true;
break;
default:
System.err.println("Invalid argument: " + arg);
System.err.println("Try '--help' for available arguments");
System.exit(1);
}
}
@@ -76,6 +91,13 @@ public class MxisdStandaloneExec {
cfg = YamlConfigLoader.tryLoadFromFile("ma1sd.yaml").orElseGet(MxisdConfig::new);
}
if (dump) {
YamlConfigLoader.dumpConfig(cfg);
if (exit) {
System.exit(0);
}
}
log.info("ma1sd starting");
log.info("Version: {}", Mxisd.Version);

View File

@@ -144,7 +144,13 @@ public class MembershipEventProcessor implements EventTypeProcessor {
.collect(Collectors.toList());
log.info("Found {} email(s) in identity store for {}", tpids.size(), inviteeId);
for (_ThreePid tpid : tpids) {
log.info("Removing duplicates from identity store");
List<_ThreePid> uniqueTpids = tpids.stream()
.distinct()
.collect(Collectors.toList());
log.info("There are {} unique email(s) in identity store for {}", uniqueTpids.size(), inviteeId);
for (_ThreePid tpid : uniqueTpids) {
log.info("Found Email to notify about room invitation: {}", tpid.getAddress());
Map<String, String> properties = new HashMap<>();
profiler.getDisplayName(sender).ifPresent(name -> properties.put("sender_display_name", name));

View File

@@ -162,6 +162,7 @@ public class LdapAuthProvider extends LdapBackend implements AuthenticatorProvid
log.info("No match were found for {}", mxid);
return BackendAuthResult.failure();
} catch (LdapException | IOException | CursorException e) {
log.error("Unable to invoke query request: ", e);
throw new InternalServerError(e);
}
}

View File

@@ -41,7 +41,10 @@ import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.stream.Collectors;
public class LdapThreePidProvider extends LdapBackend implements IThreePidProvider {
@@ -137,4 +140,65 @@ public class LdapThreePidProvider extends LdapBackend implements IThreePidProvid
return mappingsFound;
}
private List<String> getAttributes() {
final List<String> attributes = getCfg().getAttribute().getThreepid().values().stream().flatMap(List::stream)
.collect(Collectors.toList());
attributes.add(getUidAtt());
return attributes;
}
private Optional<String> getAttributeValue(Entry entry, List<String> attributes) {
return attributes.stream()
.map(attr -> getAttribute(entry, attr))
.filter(Objects::nonNull)
.filter(Optional::isPresent)
.map(Optional::get)
.findFirst();
}
@Override
public Iterable<ThreePidMapping> populateHashes() {
List<ThreePidMapping> result = new ArrayList<>();
if (!getCfg().getIdentity().isLookup()) {
return result;
}
String filter = getCfg().getIdentity().getFilter();
try (LdapConnection conn = getConn()) {
bind(conn);
log.debug("Query: {}", filter);
List<String> attributes = getAttributes();
log.debug("Attributes: {}", GsonUtil.build().toJson(attributes));
for (String baseDN : getBaseDNs()) {
log.debug("Base DN: {}", baseDN);
try (EntryCursor cursor = conn.search(baseDN, filter, SearchScope.SUBTREE, attributes.toArray(new String[0]))) {
while (cursor.next()) {
Entry entry = cursor.get();
log.info("Found possible match, DN: {}", entry.getDn().getName());
Optional<String> mxid = getAttribute(entry, getUidAtt());
if (!mxid.isPresent()) {
continue;
}
for (Map.Entry<String, List<String>> attributeEntry : getCfg().getAttribute().getThreepid().entrySet()) {
String medium = attributeEntry.getKey();
getAttributeValue(entry, attributeEntry.getValue())
.ifPresent(s -> result.add(new ThreePidMapping(medium, s, buildMatrixIdFromUid(mxid.get()))));
}
}
} catch (CursorLdapReferralException e) {
log.warn("3PID is only available via referral, skipping", e);
} catch (IOException | LdapException | CursorException e) {
log.error("Unable to fetch 3PID mappings", e);
}
}
} catch (LdapException | IOException e) {
log.error("Unable to fetch 3PID mappings", e);
}
return result;
}
}

View File

@@ -107,14 +107,16 @@ public abstract class SqlThreePidProvider implements IThreePidProvider {
@Override
public Iterable<ThreePidMapping> populateHashes() {
if (StringUtils.isBlank(cfg.getLookup().getQuery())) {
String query = cfg.getLookup().getQuery();
if (StringUtils.isBlank(query)) {
log.warn("Lookup query not configured, skip.");
return Collections.emptyList();
}
log.debug("Uses query to match users: {}", query);
List<ThreePidMapping> result = new ArrayList<>();
try (Connection connection = pool.get()) {
PreparedStatement statement = connection.prepareStatement(cfg.getLookup().getQuery());
PreparedStatement statement = connection.prepareStatement(query);
try (ResultSet resultSet = statement.executeQuery()) {
while (resultSet.next()) {
String mxid = resultSet.getString("mxid");

View File

@@ -29,23 +29,27 @@ import java.util.Optional;
public class Synapse {
private SqlConnectionPool pool;
private final SqlConnectionPool pool;
private final SynapseSqlProviderConfig providerConfig;
public Synapse(SynapseSqlProviderConfig sqlCfg) {
this.pool = new SqlConnectionPool(sqlCfg);
providerConfig = sqlCfg;
}
public Optional<String> getRoomName(String id) {
return pool.withConnFunction(conn -> {
PreparedStatement stmt = conn.prepareStatement(SynapseQueries.getRoomName());
stmt.setString(1, id);
ResultSet rSet = stmt.executeQuery();
if (!rSet.next()) {
return Optional.empty();
}
String query = providerConfig.isLegacyRoomNames() ? SynapseQueries.getLegacyRoomName() : SynapseQueries.getRoomName();
return Optional.ofNullable(rSet.getString(1));
return pool.withConnFunction(conn -> {
try (PreparedStatement stmt = conn.prepareStatement(query)) {
stmt.setString(1, id);
ResultSet rSet = stmt.executeQuery();
if (!rSet.next()) {
return Optional.empty();
}
return Optional.ofNullable(rSet.getString(1));
}
});
}
}

View File

@@ -72,7 +72,10 @@ public class SynapseQueries {
}
public static String getRoomName() {
return "select r.name from room_names r, events e, (select r1.room_id,max(e1.origin_server_ts) ts from room_names r1, events e1 where r1.event_id = e1.event_id group by r1.room_id) rle where e.origin_server_ts = rle.ts and r.event_id = e.event_id and r.room_id = ?";
return "select name from room_stats_state where room_id = ? limit 1";
}
public static String getLegacyRoomName() {
return "select r.name from room_names r, events e, (select r1.room_id,max(e1.origin_server_ts) ts from room_names r1, events e1 where r1.event_id = e1.event_id group by r1.room_id) rle where e.origin_server_ts = rle.ts and r.event_id = e.event_id and r.room_id = ?";
}
}

View File

@@ -0,0 +1,5 @@
package io.kamax.mxisd.config;
public interface DatabaseStorageConfig {
String getDatabase();
}

View File

@@ -19,7 +19,7 @@ public class HashingConfig {
private int requests = 10;
private List<Algorithm> algorithms = new ArrayList<>();
public void build() {
public void build(MatrixConfig matrixConfig) {
if (isEnabled()) {
LOGGER.info("--- Hash configuration ---");
LOGGER.info(" Pepper length: {}", getPepperLength());
@@ -35,6 +35,9 @@ public class HashingConfig {
}
LOGGER.info(" Algorithms: {}", getAlgorithms());
} else {
if (matrixConfig.isV2()) {
LOGGER.warn("V2 enabled without the hash configuration.");
}
LOGGER.info("Hash configuration disabled, only the `none` pepper is used.");
}
}

View File

@@ -0,0 +1,60 @@
package io.kamax.mxisd.config;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class LoggingConfig {
private static final Logger LOGGER = LoggerFactory.getLogger("App");
private String root;
private String app;
private boolean requests = false;
public String getRoot() {
return root;
}
public void setRoot(String root) {
this.root = root;
}
public String getApp() {
return app;
}
public void setApp(String app) {
this.app = app;
}
public boolean isRequests() {
return requests;
}
public void setRequests(boolean requests) {
this.requests = requests;
}
public void build() {
LOGGER.info("Logging config:");
if (StringUtils.isNotBlank(getRoot())) {
LOGGER.info(" Default log level: {}", getRoot());
System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", getRoot());
}
String appLevel = System.getProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd");
if (StringUtils.isNotBlank(appLevel)) {
LOGGER.info(" Logging level set by environment: {}", appLevel);
} else if (StringUtils.isNotBlank(getApp())) {
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", getApp());
LOGGER.info(" Logging level set by the configuration: {}", getApp());
} else {
LOGGER.info(" Logging level hasn't been set, using default");
}
LOGGER.info(" Log requests: {}", isRequests());
if (isRequests()) {
LOGGER.warn(" Request dumping is enabled; use it only for debugging, never in production.");
}
}
}
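Assuming the YAML keys mirror the bean properties above (a typical mapping for this config loader, not confirmed by the diff), the new logging section could look like:

```yaml
logging:
  # Default (root) log level for all loggers
  root: info
  # Level for the io.kamax.mxisd loggers; an existing
  # org.slf4j.simpleLogger.log.io.kamax.mxisd system property takes precedence
  app: debug
  # Dump full requests/responses; debugging only, never in production
  requests: false
```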


@@ -117,6 +117,7 @@ public class MxisdConfig {
private WordpressConfig wordpress = new WordpressConfig();
private PolicyConfig policy = new PolicyConfig();
private HashingConfig hashing = new HashingConfig();
private LoggingConfig logging = new LoggingConfig();
public AppServiceConfig getAppsvc() {
return appsvc;
@@ -342,6 +343,14 @@ public class MxisdConfig {
this.hashing = hashing;
}
public LoggingConfig getLogging() {
return logging;
}
public void setLogging(LoggingConfig logging) {
this.logging = logging;
}
public MxisdConfig inMemory() {
getKey().setPath(":memory:");
getStorage().getProvider().getSqlite().setDatabase(":memory:");
@@ -350,6 +359,8 @@ public class MxisdConfig {
}
public MxisdConfig build() {
getLogging().build();
if (StringUtils.isBlank(getServer().getName())) {
getServer().setName(getMatrix().getDomain());
log.debug("server.name is empty, using matrix.domain");
@@ -359,6 +370,7 @@ public class MxisdConfig {
getAuth().build();
getAccountConfig().build();
getDirectory().build();
getDns().build();
getExec().build();
getFirebase().build();
getForward().build();
@@ -381,7 +393,7 @@ public class MxisdConfig {
getView().build();
getWordpress().build();
getPolicy().build();
getHashing().build();
getHashing().build(getMatrix());
return this;
}


@@ -0,0 +1,105 @@
/*
* mxisd - Matrix Identity Server Daemon
* Copyright (C) 2017 Kamax Sarl
*
* https://www.kamax.io/
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package io.kamax.mxisd.config;
public class PostgresqlStorageConfig implements DatabaseStorageConfig {
private String database;
private String username;
private String password;
private boolean pool;
private int maxConnectionsFree = 1;
private long maxConnectionAgeMillis = 60 * 60 * 1000;
private long checkConnectionsEveryMillis = 30 * 1000;
private boolean testBeforeGetFromPool = false;
@Override
public String getDatabase() {
return database;
}
public void setDatabase(String database) {
this.database = database;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public boolean isPool() {
return pool;
}
public void setPool(boolean pool) {
this.pool = pool;
}
public int getMaxConnectionsFree() {
return maxConnectionsFree;
}
public void setMaxConnectionsFree(int maxConnectionsFree) {
this.maxConnectionsFree = maxConnectionsFree;
}
public long getMaxConnectionAgeMillis() {
return maxConnectionAgeMillis;
}
public void setMaxConnectionAgeMillis(long maxConnectionAgeMillis) {
this.maxConnectionAgeMillis = maxConnectionAgeMillis;
}
public long getCheckConnectionsEveryMillis() {
return checkConnectionsEveryMillis;
}
public void setCheckConnectionsEveryMillis(long checkConnectionsEveryMillis) {
this.checkConnectionsEveryMillis = checkConnectionsEveryMillis;
}
public boolean isTestBeforeGetFromPool() {
return testBeforeGetFromPool;
}
public void setTestBeforeGetFromPool(boolean testBeforeGetFromPool) {
this.testBeforeGetFromPool = testBeforeGetFromPool;
}
}


@@ -20,10 +20,11 @@
package io.kamax.mxisd.config;
public class SQLiteStorageConfig {
public class SQLiteStorageConfig implements DatabaseStorageConfig {
private String database;
@Override
public String getDatabase() {
return database;
}


@@ -34,6 +34,7 @@ public class ServerConfig {
private String name;
private int port = 8090;
private String publicUrl;
private String hostname;
public String getName() {
return name;
@@ -59,6 +60,14 @@ public class ServerConfig {
this.publicUrl = publicUrl;
}
public String getHostname() {
return hostname;
}
public void setHostname(String hostname) {
this.hostname = hostname;
}
public void build() {
log.info("--- Server config ---");
@@ -75,8 +84,13 @@ public class ServerConfig {
log.warn("Public URL is not valid: {}", StringUtils.defaultIfBlank(e.getMessage(), "<no reason provided>"));
}
if (StringUtils.isBlank(getHostname())) {
setHostname("0.0.0.0");
}
log.info("Name: {}", getName());
log.info("Port: {}", getPort());
log.info("Public URL: {}", getPublicUrl());
log.info("Hostname: {}", getHostname());
}
}
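With the new `hostname` property, a server block could look like the following sketch (values are hypothetical; the keys follow the bean properties above):

```yaml
server:
  name: example.org   # hypothetical identity server name
  port: 8090
  # New: bind address; defaults to 0.0.0.0 (all interfaces) when unset
  hostname: 127.0.0.1
```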


@@ -21,14 +21,21 @@
package io.kamax.mxisd.config;
import io.kamax.mxisd.exception.ConfigurationException;
import org.apache.commons.lang.StringUtils;
public class StorageConfig {
public enum BackendEnum {
sqlite,
postgresql
}
public static class Provider {
private SQLiteStorageConfig sqlite = new SQLiteStorageConfig();
private PostgresqlStorageConfig postgresql = new PostgresqlStorageConfig();
public SQLiteStorageConfig getSqlite() {
return sqlite;
}
@@ -37,16 +44,23 @@ public class StorageConfig {
this.sqlite = sqlite;
}
public PostgresqlStorageConfig getPostgresql() {
return postgresql;
}
public void setPostgresql(PostgresqlStorageConfig postgresql) {
this.postgresql = postgresql;
}
}
private String backend = "sqlite";
private BackendEnum backend = BackendEnum.sqlite; // or postgresql
private Provider provider = new Provider();
public String getBackend() {
public BackendEnum getBackend() {
return backend;
}
public void setBackend(String backend) {
public void setBackend(BackendEnum backend) {
this.backend = backend;
}
@@ -59,7 +73,7 @@ public class StorageConfig {
}
public void build() {
if (StringUtils.isBlank(getBackend())) {
if (getBackend() == null) {
throw new ConfigurationException("storage.backend");
}
}
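Putting the new enum backend and the PostgreSQL pooling options together, a storage section could be sketched as below (the connection string format and credentials are illustrative assumptions; the pooling defaults match the field initializers above):

```yaml
storage:
  # backend is now an enum: sqlite (default) or postgresql
  backend: postgresql
  provider:
    postgresql:
      database: //localhost:5432/ma1sd   # hypothetical JDBC-style path
      username: ma1sd
      password: secret
      # Connection pooling (defaults shown, pool itself is off by default)
      pool: true
      maxConnectionsFree: 1
      maxConnectionAgeMillis: 3600000      # 60 * 60 * 1000
      checkConnectionsEveryMillis: 30000   # 30 * 1000
      testBeforeGetFromPool: false
```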


@@ -76,4 +76,14 @@ public class YamlConfigLoader {
}
}
public static void dumpConfig(MxisdConfig cfg) {
Representer rep = new Representer();
rep.getPropertyUtils().setBeanAccess(BeanAccess.FIELD);
rep.getPropertyUtils().setAllowReadOnlyProperties(true);
rep.getPropertyUtils().setSkipMissingProperties(true);
Yaml yaml = new Yaml(new Constructor(MxisdConfig.class), rep);
String dump = yaml.dump(cfg);
log.info("Full configuration:\n{}", dump);
}
}


@@ -233,6 +233,7 @@ public abstract class LdapConfig {
private String filter;
private String token = "%3pid";
private Map<String, String> medium = new HashMap<>();
private boolean lookup = false;
public String getFilter() {
return filter;
@@ -262,6 +263,13 @@ public abstract class LdapConfig {
this.medium = medium;
}
public boolean isLookup() {
return lookup;
}
public void setLookup(boolean lookup) {
this.lookup = lookup;
}
}
public static class Profile {


@@ -45,4 +45,21 @@ public class MemoryThreePid implements _ThreePid {
this.address = address;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
MemoryThreePid threePid = (MemoryThreePid) o;
if (!medium.equals(threePid.medium)) return false;
return address.equals(threePid.address);
}
@Override
public int hashCode() {
int result = medium.hashCode();
result = 31 * result + address.hashCode();
return result;
}
}
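The value semantics added above can be exercised with a minimal stand-in class (names are illustrative, not the project's API); the point is that equal medium/address pairs now collapse in hash-based collections:

```java
import java.util.HashSet;
import java.util.Set;

public class ThreePidEqualityDemo {
    // Minimal stand-in mirroring MemoryThreePid's equals/hashCode contract
    static final class Pid {
        final String medium;
        final String address;

        Pid(String medium, String address) {
            this.medium = medium;
            this.address = address;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            Pid p = (Pid) o;
            return medium.equals(p.medium) && address.equals(p.address);
        }

        @Override
        public int hashCode() {
            // Same 31-multiplier scheme as in the diff above
            return 31 * medium.hashCode() + address.hashCode();
        }
    }

    public static void main(String[] args) {
        Set<Pid> pids = new HashSet<>();
        pids.add(new Pid("email", "john@example.org"));
        pids.add(new Pid("email", "john@example.org")); // duplicate, not stored twice
        System.out.println(pids.size()); // 1
    }
}
```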


@@ -125,7 +125,7 @@ public abstract class SqlConfig {
}
public static class Lookup {
private String query = "SELECT user_id AS mxid, medium, address from user_threepids";
private String query = "SELECT user_id AS mxid, medium, address from user_threepid_id_server";
public String getQuery() {
return query;
@@ -140,7 +140,7 @@ public abstract class SqlConfig {
private Boolean enabled;
private String type = "mxid";
private String query = "SELECT user_id AS uid FROM user_threepids WHERE medium = ? AND address = ?";
private String query = "SELECT user_id AS uid FROM user_threepid_id_server WHERE medium = ? AND address = ?";
private Map<String, String> medium = new HashMap<>();
public Boolean isEnabled() {


@@ -24,9 +24,23 @@ import io.kamax.mxisd.UserIdType;
import io.kamax.mxisd.backend.sql.synapse.SynapseQueries;
import io.kamax.mxisd.config.sql.SqlConfig;
import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class SynapseSqlProviderConfig extends SqlConfig {
private transient final Logger log = LoggerFactory.getLogger(SynapseSqlProviderConfig.class);
private boolean legacyRoomNames = false;
public boolean isLegacyRoomNames() {
return legacyRoomNames;
}
public void setLegacyRoomNames(boolean legacyRoomNames) {
this.legacyRoomNames = legacyRoomNames;
}
@Override
protected String getProviderName() {
return "Synapse SQL";
@@ -42,7 +56,7 @@ public class SynapseSqlProviderConfig extends SqlConfig {
if (getIdentity().isEnabled() && StringUtils.isBlank(getIdentity().getType())) {
getIdentity().setType("mxid");
getIdentity().setQuery("SELECT user_id AS uid FROM user_threepids WHERE medium = ? AND address = ?");
getIdentity().setQuery("SELECT user_id AS uid FROM user_threepid_id_server WHERE medium = ? AND address = ?");
}
if (getProfile().isEnabled()) {
@@ -65,4 +79,12 @@ public class SynapseSqlProviderConfig extends SqlConfig {
printConfig();
}
@Override
protected void printConfig() {
super.printConfig();
if (isEnabled()) {
log.info("Use legacy room name query: {}", isLegacyRoomNames());
}
}
}
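The new flag can be enabled in the Synapse SQL provider section; assuming the provider's YAML key is `synapseSql` (the key name is an assumption based on the class name, not shown in the diff):

```yaml
synapseSql:
  # Use the old Synapse query to resolve room names (pre user_threepid_id_server schemas)
  legacyRoomNames: true
```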


@@ -1,6 +1,9 @@
package io.kamax.mxisd.hash;
import io.kamax.mxisd.config.HashingConfig;
import io.kamax.mxisd.hash.engine.Engine;
import io.kamax.mxisd.hash.engine.HashEngine;
import io.kamax.mxisd.hash.engine.NoneEngine;
import io.kamax.mxisd.hash.rotation.HashRotationStrategy;
import io.kamax.mxisd.hash.rotation.NoOpRotationStrategy;
import io.kamax.mxisd.hash.rotation.RotationPerRequests;
@@ -21,7 +24,7 @@ public class HashManager {
private static final Logger LOGGER = LoggerFactory.getLogger(HashManager.class);
private HashEngine hashEngine;
private Engine engine;
private HashRotationStrategy rotationStrategy;
private HashStorage hashStorage;
private HashingConfig config;
@@ -32,7 +35,7 @@ public class HashManager {
this.config = config;
this.storage = storage;
initStorage();
hashEngine = new HashEngine(providers, getHashStorage(), config);
engine = config.isEnabled() ? new HashEngine(providers, getHashStorage(), config) : new NoneEngine();
initRotationStrategy();
configured.set(true);
}
@@ -73,8 +76,8 @@ public class HashManager {
this.rotationStrategy.register(getHashEngine());
}
public HashEngine getHashEngine() {
return hashEngine;
public Engine getHashEngine() {
return engine;
}
public HashRotationStrategy getRotationStrategy() {


@@ -0,0 +1,7 @@
package io.kamax.mxisd.hash.engine;
public interface Engine {
void updateHashes();
String getPepper();
}


@@ -1,4 +1,4 @@
package io.kamax.mxisd.hash;
package io.kamax.mxisd.hash.engine;
import io.kamax.mxisd.config.HashingConfig;
import io.kamax.mxisd.hash.storage.HashStorage;
@@ -12,7 +12,7 @@ import org.slf4j.LoggerFactory;
import java.util.Base64;
import java.util.List;
public class HashEngine {
public class HashEngine implements Engine {
private static final Logger LOGGER = LoggerFactory.getLogger(HashEngine.class);
@@ -28,6 +28,7 @@ public class HashEngine {
this.config = config;
}
@Override
public void updateHashes() {
LOGGER.info("Starting hash update.");
synchronized (hashStorage) {
@@ -35,7 +36,9 @@ public class HashEngine {
hashStorage.clear();
for (IThreePidProvider provider : providers) {
try {
LOGGER.info("Populating hashes from provider: {}", provider.getClass().getCanonicalName());
for (ThreePidMapping pidMapping : provider.populateHashes()) {
LOGGER.debug("Found 3PID: {}", pidMapping);
hashStorage.add(pidMapping, hash(pidMapping));
}
} catch (Exception e) {
@@ -46,6 +49,7 @@ public class HashEngine {
LOGGER.info("Finished hash update.");
}
@Override
public String getPepper() {
synchronized (hashStorage) {
return pepper;


@@ -0,0 +1,19 @@
package io.kamax.mxisd.hash.engine;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class NoneEngine implements Engine {
private static final Logger LOGGER = LoggerFactory.getLogger(NoneEngine.class);
@Override
public void updateHashes() {
LOGGER.info("Nothing to update.");
}
@Override
public String getPepper() {
return "";
}
}
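The new `Engine` abstraction lets `HashManager` swap in `NoneEngine` when hashing is disabled. A stdlib-only sketch of that selection (the real `HashEngine` hashing is replaced by a placeholder, and the pepper value is invented):

```java
public class EngineSelectionDemo {
    // Mirrors io.kamax.mxisd.hash.engine.Engine
    interface Engine {
        void updateHashes();
        String getPepper();
    }

    // Placeholder for the real HashEngine, which hashes all known 3PIDs
    static final class HashEngine implements Engine {
        public void updateHashes() { /* rebuild the hash storage */ }
        public String getPepper() { return "s3cr3t"; } // illustrative pepper
    }

    // Mirrors NoneEngine: nothing to update, empty pepper
    static final class NoneEngine implements Engine {
        public void updateHashes() { }
        public String getPepper() { return ""; }
    }

    // Same decision HashManager now makes at construction time
    static Engine select(boolean hashingEnabled) {
        return hashingEnabled ? new HashEngine() : new NoneEngine();
    }

    public static void main(String[] args) {
        System.out.println(select(false).getPepper().isEmpty()); // true
    }
}
```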


@@ -1,12 +1,12 @@
package io.kamax.mxisd.hash.rotation;
import io.kamax.mxisd.hash.HashEngine;
import io.kamax.mxisd.hash.engine.Engine;
public interface HashRotationStrategy {
void register(HashEngine hashEngine);
void register(Engine engine);
HashEngine getHashEngine();
Engine getHashEngine();
void newRequest();


@@ -1,19 +1,19 @@
package io.kamax.mxisd.hash.rotation;
import io.kamax.mxisd.hash.HashEngine;
import io.kamax.mxisd.hash.engine.Engine;
public class NoOpRotationStrategy implements HashRotationStrategy {
private HashEngine hashEngine;
private Engine engine;
@Override
public void register(HashEngine hashEngine) {
this.hashEngine = hashEngine;
public void register(Engine engine) {
this.engine = engine;
}
@Override
public HashEngine getHashEngine() {
return hashEngine;
public Engine getHashEngine() {
return engine;
}
@Override


@@ -1,12 +1,12 @@
package io.kamax.mxisd.hash.rotation;
import io.kamax.mxisd.hash.HashEngine;
import io.kamax.mxisd.hash.engine.Engine;
import java.util.concurrent.atomic.AtomicInteger;
public class RotationPerRequests implements HashRotationStrategy {
private HashEngine hashEngine;
private Engine engine;
private final AtomicInteger counter = new AtomicInteger(0);
private final int barrier;
@@ -15,14 +15,14 @@ public class RotationPerRequests implements HashRotationStrategy {
}
@Override
public void register(HashEngine hashEngine) {
this.hashEngine = hashEngine;
public void register(Engine engine) {
this.engine = engine;
trigger();
}
@Override
public HashEngine getHashEngine() {
return hashEngine;
public Engine getHashEngine() {
return engine;
}
@Override
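The per-request rotation above reduces to a barrier counter: every request bumps the counter, and when it reaches the barrier the hashes are rebuilt and the counter resets. A stdlib-only sketch (barrier value and names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RotationCounterDemo {
    private final AtomicInteger counter = new AtomicInteger(0);
    private final int barrier;
    int rotations = 0; // stands in for Engine#updateHashes invocations

    RotationCounterDemo(int barrier) {
        this.barrier = barrier;
    }

    // Mirrors the shape of RotationPerRequests#newRequest
    void newRequest() {
        if (counter.incrementAndGet() >= barrier) {
            counter.set(0);
            rotations++; // trigger(): rebuild hashes with a fresh pepper
        }
    }

    public static void main(String[] args) {
        RotationCounterDemo demo = new RotationCounterDemo(10);
        for (int i = 0; i < 25; i++) {
            demo.newRequest();
        }
        System.out.println(demo.rotations); // 2 (after requests 10 and 20)
    }
}
```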


@@ -1,6 +1,6 @@
package io.kamax.mxisd.hash.rotation;
import io.kamax.mxisd.hash.HashEngine;
import io.kamax.mxisd.hash.engine.Engine;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
@@ -9,7 +9,7 @@ import java.util.concurrent.TimeUnit;
public class TimeBasedRotation implements HashRotationStrategy {
private final long delay;
private HashEngine hashEngine;
private Engine engine;
private final ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();
public TimeBasedRotation(long delay) {
@@ -17,15 +17,15 @@ public class TimeBasedRotation implements HashRotationStrategy {
}
@Override
public void register(HashEngine hashEngine) {
this.hashEngine = hashEngine;
public void register(Engine engine) {
this.engine = engine;
Runtime.getRuntime().addShutdownHook(new Thread(executorService::shutdown));
executorService.scheduleWithFixedDelay(this::trigger, 0, delay, TimeUnit.SECONDS);
}
@Override
public HashEngine getHashEngine() {
return hashEngine;
public Engine getHashEngine() {
return engine;
}
@Override


@@ -0,0 +1,5 @@
package io.kamax.mxisd.http.undertow.conduit;
public interface ConduitWithDump {
String dump();
}


@@ -0,0 +1,107 @@
/*
* JBoss, Home of Professional Open Source.
* Copyright 2014 Red Hat, Inc., and individual contributors
* as indicated by the @author tags.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.kamax.mxisd.http.undertow.conduit;
import org.xnio.IoUtils;
import org.xnio.channels.StreamSourceChannel;
import org.xnio.conduits.AbstractStreamSinkConduit;
import org.xnio.conduits.ConduitWritableByteChannel;
import org.xnio.conduits.Conduits;
import org.xnio.conduits.StreamSinkConduit;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
/**
* Conduit that saves all the data that is written through it and can dump it to the console
* <p>
* Obviously this should not be used in production.
*
* @author Stuart Douglas
*/
public class DebuggingStreamSinkConduit extends AbstractStreamSinkConduit<StreamSinkConduit> implements ConduitWithDump {
private final List<byte[]> data = new CopyOnWriteArrayList<>();
/**
* Construct a new instance.
*
* @param next the delegate conduit to set
*/
public DebuggingStreamSinkConduit(StreamSinkConduit next) {
super(next);
}
@Override
public int write(ByteBuffer src) throws IOException {
int pos = src.position();
int res = super.write(src);
if (res > 0) {
byte[] d = new byte[res];
for (int i = 0; i < res; ++i) {
d[i] = src.get(i + pos);
}
data.add(d);
}
return res;
}
@Override
public long write(ByteBuffer[] dsts, int offs, int len) throws IOException {
for (int i = offs; i < len; ++i) {
if (dsts[i].hasRemaining()) {
return write(dsts[i]);
}
}
return 0;
}
@Override
public long transferFrom(final FileChannel src, final long position, final long count) throws IOException {
return src.transferTo(position, count, new ConduitWritableByteChannel(this));
}
@Override
public long transferFrom(final StreamSourceChannel source, final long count, final ByteBuffer throughBuffer) throws IOException {
return IoUtils.transfer(source, count, throughBuffer, new ConduitWritableByteChannel(this));
}
@Override
public int writeFinal(ByteBuffer src) throws IOException {
return Conduits.writeFinalBasic(this, src);
}
@Override
public long writeFinal(ByteBuffer[] srcs, int offset, int length) throws IOException {
return Conduits.writeFinalBasic(this, srcs, offset, length);
}
@Override
public String dump() {
StringBuilder sb = new StringBuilder();
for (byte[] datum : data) {
sb.append(new String(datum, StandardCharsets.UTF_8));
}
return sb.toString();
}
}
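The capture loop in `write()` copies exactly the bytes the delegate consumed, using absolute `get(i + pos)` so the buffer's position and contents are left untouched. Isolated from Undertow, the pattern looks like this (helper name is invented for the demo):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CaptureDemo {
    // Copy res bytes starting at the position the write began from,
    // without disturbing the buffer (same pattern as the conduit above)
    static byte[] capture(ByteBuffer src, int pos, int res) {
        byte[] d = new byte[res];
        for (int i = 0; i < res; ++i) {
            d[i] = src.get(i + pos);
        }
        return d;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));
        int pos = buf.position();       // 0 before the "write"
        buf.position(buf.limit());      // pretend the delegate consumed everything
        int res = buf.position() - pos; // bytes consumed by the delegate
        System.out.println(new String(capture(buf, pos, res), StandardCharsets.UTF_8)); // hello
    }
}
```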


@@ -0,0 +1,95 @@
/*
* JBoss, Home of Professional Open Source.
* Copyright 2014 Red Hat, Inc., and individual contributors
* as indicated by the @author tags.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.kamax.mxisd.http.undertow.conduit;
import org.xnio.IoUtils;
import org.xnio.channels.StreamSinkChannel;
import org.xnio.conduits.AbstractStreamSourceConduit;
import org.xnio.conduits.ConduitReadableByteChannel;
import org.xnio.conduits.StreamSourceConduit;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
/**
* Conduit that saves all the data that is read through it and can dump it to the console
* <p>
* Obviously this should not be used in production.
*
* @author Stuart Douglas
*/
public class DebuggingStreamSourceConduit extends AbstractStreamSourceConduit<StreamSourceConduit> implements ConduitWithDump {
private final List<byte[]> data = new CopyOnWriteArrayList<>();
/**
* Construct a new instance.
*
* @param next the delegate conduit to set
*/
public DebuggingStreamSourceConduit(StreamSourceConduit next) {
super(next);
}
public long transferTo(final long position, final long count, final FileChannel target) throws IOException {
return target.transferFrom(new ConduitReadableByteChannel(this), position, count);
}
public long transferTo(final long count, final ByteBuffer throughBuffer, final StreamSinkChannel target) throws IOException {
return IoUtils.transfer(new ConduitReadableByteChannel(this), count, throughBuffer, target);
}
@Override
public int read(ByteBuffer dst) throws IOException {
int pos = dst.position();
int res = super.read(dst);
if (res > 0) {
byte[] d = new byte[res];
for (int i = 0; i < res; ++i) {
d[i] = dst.get(i + pos);
}
data.add(d);
}
return res;
}
@Override
public long read(ByteBuffer[] dsts, int offs, int len) throws IOException {
for (int i = offs; i < len; ++i) {
if (dsts[i].hasRemaining()) {
return read(dsts[i]);
}
}
return 0;
}
@Override
public String dump() {
StringBuilder sb = new StringBuilder();
for (byte[] datum : data) {
sb.append(new String(datum, StandardCharsets.UTF_8));
}
return sb.toString();
}
}


@@ -0,0 +1,23 @@
package io.kamax.mxisd.http.undertow.conduit;
import io.undertow.server.ConduitWrapper;
import io.undertow.server.HttpServerExchange;
import io.undertow.util.ConduitFactory;
import org.xnio.conduits.Conduit;
public abstract class LazyConduitWrapper<T extends Conduit> implements ConduitWrapper<T> {
private T conduit = null;
protected abstract T create(ConduitFactory<T> factory, HttpServerExchange exchange);
@Override
public T wrap(ConduitFactory<T> factory, HttpServerExchange exchange) {
conduit = create(factory, exchange);
return conduit;
}
public T get() {
return conduit;
}
}


@@ -0,0 +1,186 @@
/*
* JBoss, Home of Professional Open Source.
* Copyright 2014 Red Hat, Inc., and individual contributors
* as indicated by the @author tags.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.kamax.mxisd.http.undertow.handler;
import io.kamax.mxisd.http.undertow.conduit.ConduitWithDump;
import io.kamax.mxisd.http.undertow.conduit.DebuggingStreamSinkConduit;
import io.kamax.mxisd.http.undertow.conduit.DebuggingStreamSourceConduit;
import io.kamax.mxisd.http.undertow.conduit.LazyConduitWrapper;
import io.undertow.security.api.SecurityContext;
import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;
import io.undertow.server.handlers.Cookie;
import io.undertow.util.ConduitFactory;
import io.undertow.util.HeaderValues;
import io.undertow.util.Headers;
import io.undertow.util.LocaleUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.xnio.conduits.StreamSinkConduit;
import org.xnio.conduits.StreamSourceConduit;
import java.util.Deque;
import java.util.Iterator;
import java.util.Map;
/**
* Handler that dumps an exchange to a log.
*
* @author Stuart Douglas
*/
public class RequestDumpingHandler implements HttpHandler {
private static final Logger LOGGER = LoggerFactory.getLogger(RequestDumpingHandler.class);
private final HttpHandler next;
public RequestDumpingHandler(HttpHandler next) {
this.next = next;
}
@Override
public void handleRequest(HttpServerExchange exchange) throws Exception {
LazyConduitWrapper<StreamSourceConduit> requestConduitWrapper = new LazyConduitWrapper<StreamSourceConduit>() {
@Override
protected StreamSourceConduit create(ConduitFactory<StreamSourceConduit> factory, HttpServerExchange exchange) {
return new DebuggingStreamSourceConduit(factory.create());
}
};
LazyConduitWrapper<StreamSinkConduit> responseConduitWrapper = new LazyConduitWrapper<StreamSinkConduit>() {
@Override
protected StreamSinkConduit create(ConduitFactory<StreamSinkConduit> factory, HttpServerExchange exchange) {
return new DebuggingStreamSinkConduit(factory.create());
}
};
exchange.addRequestWrapper(requestConduitWrapper);
exchange.addResponseWrapper(responseConduitWrapper);
final StringBuilder sb = new StringBuilder();
// Log pre-service information
final SecurityContext sc = exchange.getSecurityContext();
sb.append("\n----------------------------REQUEST---------------------------\n");
sb.append(" URI=").append(exchange.getRequestURI()).append("\n");
sb.append(" characterEncoding=").append(exchange.getRequestHeaders().get(Headers.CONTENT_ENCODING)).append("\n");
sb.append(" contentLength=").append(exchange.getRequestContentLength()).append("\n");
sb.append(" contentType=").append(exchange.getRequestHeaders().get(Headers.CONTENT_TYPE)).append("\n");
//sb.append(" contextPath=" + exchange.getContextPath());
if (sc != null) {
if (sc.isAuthenticated()) {
sb.append(" authType=").append(sc.getMechanismName()).append("\n");
sb.append("             principal=").append(sc.getAuthenticatedAccount().getPrincipal()).append("\n");
} else {
sb.append(" authType=none\n");
}
}
Map<String, Cookie> cookies = exchange.getRequestCookies();
if (cookies != null) {
for (Map.Entry<String, Cookie> entry : cookies.entrySet()) {
Cookie cookie = entry.getValue();
sb.append(" cookie=").append(cookie.getName()).append("=").append(cookie.getValue()).append("\n");
}
}
for (HeaderValues header : exchange.getRequestHeaders()) {
for (String value : header) {
sb.append(" header=").append(header.getHeaderName()).append("=").append(value).append("\n");
}
}
sb.append(" locale=").append(LocaleUtils.getLocalesFromHeader(exchange.getRequestHeaders().get(Headers.ACCEPT_LANGUAGE)))
.append("\n");
sb.append(" method=").append(exchange.getRequestMethod()).append("\n");
Map<String, Deque<String>> pnames = exchange.getQueryParameters();
for (Map.Entry<String, Deque<String>> entry : pnames.entrySet()) {
String pname = entry.getKey();
Iterator<String> pvalues = entry.getValue().iterator();
sb.append(" parameter=");
sb.append(pname);
sb.append('=');
while (pvalues.hasNext()) {
sb.append(pvalues.next());
if (pvalues.hasNext()) {
sb.append(", ");
}
}
sb.append("\n");
}
//sb.append(" pathInfo=" + exchange.getPathInfo());
sb.append(" protocol=").append(exchange.getProtocol()).append("\n");
sb.append(" queryString=").append(exchange.getQueryString()).append("\n");
sb.append(" remoteAddr=").append(exchange.getSourceAddress()).append("\n");
sb.append(" remoteHost=").append(exchange.getSourceAddress().getHostName()).append("\n");
//sb.append("requestedSessionId=" + exchange.getRequestedSessionId());
sb.append(" scheme=").append(exchange.getRequestScheme()).append("\n");
sb.append(" host=").append(exchange.getRequestHeaders().getFirst(Headers.HOST)).append("\n");
sb.append(" serverPort=").append(exchange.getDestinationAddress().getPort()).append("\n");
//sb.append(" servletPath=" + exchange.getServletPath());
sb.append(" isSecure=").append(exchange.isSecure()).append("\n");
exchange.addExchangeCompleteListener((exchange1, nextListener) -> {
StreamSourceConduit sourceConduit = requestConduitWrapper.get();
if (sourceConduit instanceof ConduitWithDump) {
ConduitWithDump conduitWithDump = (ConduitWithDump) sourceConduit;
sb.append("body=\n");
sb.append(conduitWithDump.dump()).append("\n");
}
// Log post-service information
sb.append("--------------------------RESPONSE--------------------------\n");
if (sc != null) {
if (sc.isAuthenticated()) {
sb.append(" authType=").append(sc.getMechanismName()).append("\n");
sb.append(" principal=").append(sc.getAuthenticatedAccount().getPrincipal()).append("\n");
} else {
sb.append(" authType=none\n");
}
}
sb.append(" contentLength=").append(exchange1.getResponseContentLength()).append("\n");
sb.append(" contentType=").append(exchange1.getResponseHeaders().getFirst(Headers.CONTENT_TYPE)).append("\n");
Map<String, Cookie> cookies1 = exchange1.getResponseCookies();
if (cookies1 != null) {
for (Cookie cookie : cookies1.values()) {
sb.append(" cookie=").append(cookie.getName()).append("=").append(cookie.getValue()).append("; domain=")
.append(cookie.getDomain()).append("; path=").append(cookie.getPath()).append("\n");
}
}
for (HeaderValues header : exchange1.getResponseHeaders()) {
for (String value : header) {
sb.append(" header=").append(header.getHeaderName()).append("=").append(value).append("\n");
}
}
sb.append(" status=").append(exchange1.getStatusCode()).append("\n");
StreamSinkConduit streamSinkConduit = responseConduitWrapper.get();
if (streamSinkConduit instanceof ConduitWithDump) {
ConduitWithDump conduitWithDump = (ConduitWithDump) streamSinkConduit;
sb.append("body=\n");
sb.append(conduitWithDump.dump());
}
sb.append("\n==============================================================");
nextListener.proceed();
LOGGER.info(sb.toString());
});
// Perform the exchange
next.handleRequest(exchange);
}
}


@@ -31,6 +31,8 @@ public class HashDetailsHandler extends BasicHttpHandler {
for (HashingConfig.Algorithm algorithm : config.getAlgorithms()) {
algorithms.add(algorithm.name().toLowerCase());
}
} else {
algorithms.add(HashingConfig.Algorithm.none.name().toLowerCase());
}
response.add("algorithms", algorithms);
return response;


@@ -67,10 +67,6 @@ public class HashLookupHandler extends LookupHandler implements ApiHandler {
log.info("Got bulk lookup request from {} with client {} - Is recursive? {}",
lookupRequest.getRequester(), lookupRequest.getUserAgent(), lookupRequest.isRecursive());
if (!hashManager.getConfig().isEnabled()) {
throw new InvalidParamException();
}
if (!hashManager.getHashEngine().getPepper().equals(input.getPepper())) {
throw new InvalidPepperException();
}
@@ -85,10 +81,11 @@ public class HashLookupHandler extends LookupHandler implements ApiHandler {
default:
throw new InvalidParamException();
}
hashManager.getRotationStrategy().newRequest();
}
private void noneAlgorithm(HttpServerExchange exchange, HashLookupRequest request, ClientHashLookupRequest input) throws Exception {
if (!hashManager.getConfig().getAlgorithms().contains(HashingConfig.Algorithm.none)) {
if (hashManager.getConfig().isEnabled() && !hashManager.getConfig().getAlgorithms().contains(HashingConfig.Algorithm.none)) {
throw new InvalidParamException();
}


@@ -46,6 +46,4 @@ public interface LookupStrategy {
Optional<SingleLookupReply> findRecursive(SingleLookupRequest request);
CompletableFuture<List<ThreePidMapping>> find(BulkLookupRequest requests);
CompletableFuture<List<ThreePidMapping>> find(HashLookupRequest request);
}


@@ -26,17 +26,23 @@ import io.kamax.matrix.json.MatrixJson;
import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.exception.ConfigurationException;
import io.kamax.mxisd.hash.HashManager;
import io.kamax.mxisd.hash.storage.HashStorage;
import io.kamax.mxisd.lookup.*;
import io.kamax.mxisd.lookup.ALookupRequest;
import io.kamax.mxisd.lookup.BulkLookupRequest;
import io.kamax.mxisd.lookup.SingleLookupReply;
import io.kamax.mxisd.lookup.SingleLookupRequest;
import io.kamax.mxisd.lookup.ThreePidMapping;
import io.kamax.mxisd.lookup.fetcher.IBridgeFetcher;
import io.kamax.mxisd.lookup.provider.IThreePidProvider;
import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.lang3.tuple.Pair;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.net.UnknownHostException;
import java.util.*;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
@@ -238,13 +244,4 @@ public class RecursivePriorityLookupStrategy implements LookupStrategy {
result.complete(mapFoundAll);
return bulkLookupInProgress.remove(payloadId);
}
@Override
public CompletableFuture<List<ThreePidMapping>> find(HashLookupRequest request) {
HashStorage hashStorage = hashManager.getHashStorage();
CompletableFuture<List<ThreePidMapping>> result = new CompletableFuture<>();
result.complete(hashStorage.find(request.getHashes()).stream().map(Pair::getValue).collect(Collectors.toList()));
hashManager.getRotationStrategy().newRequest();
return result;
}
}
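The `find(HashLookupRequest)` override above answers a lookup entirely from the pre-computed hash store and then pings the rotation strategy. Stripped of the ma1sd types, the storage side reduces to filtering stored (hash, mapping) pairs by the requested hashes — a sketch in plain Java with illustrative names:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class HashStorageSketch {

    // Stand-in for HashStorage: (hash, mxid) pairs rebuilt on each pepper rotation.
    private final List<Map.Entry<String, String>> pairs;

    public HashStorageSketch(List<Map.Entry<String, String>> pairs) {
        this.pairs = pairs;
    }

    // Mirrors hashStorage.find(request.getHashes()) followed by map(Pair::getValue):
    // keep the pairs whose hash was requested, return only the mapped values.
    public List<String> find(List<String> hashes) {
        return pairs.stream()
                .filter(p -> hashes.contains(p.getKey()))
                .map(Map.Entry::getValue)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        HashStorageSketch storage = new HashStorageSketch(List.of(
                new SimpleEntry<>("h1", "@alice:example.org"),
                new SimpleEntry<>("h2", "@bob:example.org")));
        System.out.println(storage.find(List.of("h1", "h3"))); // prints [@alice:example.org]
    }
}
```

Unknown hashes simply drop out of the result, so the endpoint leaks nothing about addresses the server does not know.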


@@ -23,13 +23,18 @@ package io.kamax.mxisd.storage.ormlite;
import com.j256.ormlite.dao.CloseableWrappedIterable;
import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.dao.DaoManager;
import com.j256.ormlite.db.PostgresDatabaseType;
import com.j256.ormlite.db.SqliteDatabaseType;
import com.j256.ormlite.jdbc.JdbcConnectionSource;
import com.j256.ormlite.jdbc.JdbcPooledConnectionSource;
import com.j256.ormlite.stmt.QueryBuilder;
import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.table.TableUtils;
import io.kamax.matrix.ThreePid;
import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.config.PolicyConfig;
import io.kamax.mxisd.config.PostgresqlStorageConfig;
import io.kamax.mxisd.config.SQLiteStorageConfig;
import io.kamax.mxisd.config.StorageConfig;
import io.kamax.mxisd.exception.ConfigurationException;
import io.kamax.mxisd.exception.InternalServerError;
import io.kamax.mxisd.exception.InvalidCredentialsException;
@@ -81,6 +86,8 @@ public class OrmLiteSqlStorage implements IStorage {
public static class Migrations {
public static final String FIX_ACCEPTED_DAO = "2019_12_09__2254__fix_accepted_dao";
public static final String FIX_HASH_DAO_UNIQUE_INDEX = "2020_03_22__1153__fix_hash_dao_unique_index";
public static final String CHANGE_TYPE_TO_TEXT_INVITE = "2020_04_21__2338__change_type_table_invites";
}
private Dao<ThreePidInviteIO, String> invDao;
@@ -91,40 +98,85 @@ public class OrmLiteSqlStorage implements IStorage {
private Dao<AcceptedDao, Long> acceptedDao;
private Dao<HashDao, String> hashDao;
private Dao<ChangelogDao, String> changelogDao;
private StorageConfig.BackendEnum backend;
public OrmLiteSqlStorage(MxisdConfig cfg) {
this(cfg.getStorage().getBackend(), cfg.getStorage().getProvider().getSqlite().getDatabase());
}
- public OrmLiteSqlStorage(String backend, String path) {
- if (StringUtils.isBlank(backend)) {
+ public OrmLiteSqlStorage(StorageConfig.BackendEnum backend, StorageConfig.Provider provider) {
+ if (backend == null) {
throw new ConfigurationException("storage.backend");
}
- if (StringUtils.isBlank(path)) {
- throw new ConfigurationException("Storage destination cannot be empty");
- }
+ this.backend = backend;
withCatcher(() -> {
- ConnectionSource connPool = new JdbcConnectionSource("jdbc:" + backend + ":" + path);
+ ConnectionSource connPool;
+ switch (backend) {
+ case postgresql:
+ connPool = createPostgresqlConnection(provider.getPostgresql());
+ break;
+ case sqlite:
+ connPool = createSqliteConnection(provider.getSqlite());
+ break;
+ default:
+ throw new ConfigurationException("storage.backend");
+ }
changelogDao = createDaoAndTable(connPool, ChangelogDao.class);
invDao = createDaoAndTable(connPool, ThreePidInviteIO.class);
expInvDao = createDaoAndTable(connPool, HistoricalThreePidInviteIO.class);
sessionDao = createDaoAndTable(connPool, ThreePidSessionDao.class);
asTxnDao = createDaoAndTable(connPool, ASTransactionDao.class);
accountDao = createDaoAndTable(connPool, AccountDao.class);
- acceptedDao = createDaoAndTable(connPool, AcceptedDao.class);
- hashDao = createDaoAndTable(connPool, HashDao.class);
+ acceptedDao = createDaoAndTable(connPool, AcceptedDao.class, true);
+ hashDao = createDaoAndTable(connPool, HashDao.class, true);
runMigration(connPool);
});
}
private ConnectionSource createSqliteConnection(SQLiteStorageConfig config) throws SQLException {
if (StringUtils.isBlank(config.getDatabase())) {
throw new ConfigurationException("Storage destination cannot be empty");
}
return new JdbcConnectionSource("jdbc:" + backend + ":" + config.getDatabase(), null, null, new SqliteDatabaseType());
}
private ConnectionSource createPostgresqlConnection(PostgresqlStorageConfig config) throws SQLException {
if (StringUtils.isBlank(config.getDatabase())) {
throw new ConfigurationException("Storage destination cannot be empty");
}
if (config.isPool()) {
LOGGER.info("Enable pooling");
JdbcPooledConnectionSource source = new JdbcPooledConnectionSource(
"jdbc:" + backend + ":" + config.getDatabase(), config.getUsername(), config.getPassword(),
new PostgresDatabaseType());
source.setMaxConnectionsFree(config.getMaxConnectionsFree());
source.setMaxConnectionAgeMillis(config.getMaxConnectionAgeMillis());
source.setCheckConnectionsEveryMillis(config.getCheckConnectionsEveryMillis());
source.setTestBeforeGet(config.isTestBeforeGetFromPool());
return source;
} else {
return new JdbcConnectionSource("jdbc:" + backend + ":" + config.getDatabase(), config.getUsername(), config.getPassword(),
new PostgresDatabaseType());
}
}
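The pooled branch above is driven entirely by `PostgresqlStorageConfig`. A hedged sketch of the matching YAML — the key names are inferred from the getters in this diff (`pool`, `maxConnectionsFree`, `maxConnectionAgeMillis`, `checkConnectionsEveryMillis`, `testBeforeGetFromPool`); check the ma1sd configuration docs for the authoritative spelling:

```yaml
storage:
  backend: postgresql
  provider:
    postgresql:
      # The JDBC URL is assembled as "jdbc:postgresql:<database>"
      database: //localhost:5432/ma1sd
      username: ma1sd
      password: secret
      pool: true                        # use JdbcPooledConnectionSource
      maxConnectionsFree: 5
      maxConnectionAgeMillis: 3600000   # recycle connections after 1h
      checkConnectionsEveryMillis: 30000
      testBeforeGetFromPool: false
```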
private void runMigration(ConnectionSource connPol) throws SQLException {
ChangelogDao fixAcceptedDao = changelogDao.queryForId(Migrations.FIX_ACCEPTED_DAO);
if (fixAcceptedDao == null) {
fixAcceptedDao(connPol);
changelogDao.create(new ChangelogDao(Migrations.FIX_ACCEPTED_DAO, new Date(), "Recreate the accepted table."));
}
ChangelogDao fixHashDaoUniqueIndex = changelogDao.queryForId(Migrations.FIX_HASH_DAO_UNIQUE_INDEX);
if (fixHashDaoUniqueIndex == null) {
fixHashDaoUniqueIndex(connPol);
changelogDao
.create(new ChangelogDao(Migrations.FIX_HASH_DAO_UNIQUE_INDEX, new Date(), "Add the id and migrate the unique index."));
}
ChangelogDao fixInviteTableColumnType = changelogDao.queryForId(Migrations.CHANGE_TYPE_TO_TEXT_INVITE);
if (fixInviteTableColumnType == null) {
fixInviteTableColumnType(connPol);
changelogDao.create(new ChangelogDao(Migrations.CHANGE_TYPE_TO_TEXT_INVITE, new Date(), "Modify column type to text."));
}
}
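The three blocks in `runMigration` all follow one pattern: query the changelog table for the migration id, run the step only if the id is absent, then record it so the step never runs again. The pattern in isolation — a plain-Java stand-in, not ma1sd code:

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class MigrationRunner {

    // Stand-in for the changelog table: ids of migrations that already ran.
    private final Set<String> applied = new HashSet<>();
    // Insertion order matters, just as the check order does in runMigration.
    private final Map<String, Runnable> migrations = new LinkedHashMap<>();

    public void register(String id, Runnable step) {
        migrations.put(id, step);
    }

    // Mirrors the queryForId-then-create guard: each step runs at most once,
    // so calling run() on every startup is safe.
    public void run() {
        migrations.forEach((id, step) -> {
            if (applied.contains(id)) {
                return; // already recorded in the changelog, skip
            }
            step.run();
            applied.add(id); // changelogDao.create(...) in the real code
        });
    }
}
```

Because the guard and the step are not atomic here (or in the original), a crash between `step.run()` and the record leaves the step eligible to re-run, which is why the migrations themselves are written idempotently (`createTableIfNotExists`, `alter column ... type text`).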
private void fixAcceptedDao(ConnectionSource connPool) throws SQLException {
@@ -133,10 +185,47 @@ public class OrmLiteSqlStorage implements IStorage {
TableUtils.createTableIfNotExists(connPool, AcceptedDao.class);
}
private void fixHashDaoUniqueIndex(ConnectionSource connPool) throws SQLException {
LOGGER.info("Migration: {}", Migrations.FIX_HASH_DAO_UNIQUE_INDEX);
TableUtils.dropTable(hashDao, true);
TableUtils.createTableIfNotExists(connPool, HashDao.class);
}
private void fixInviteTableColumnType(ConnectionSource connPool) throws SQLException {
LOGGER.info("Migration: {}", Migrations.CHANGE_TYPE_TO_TEXT_INVITE);
if (StorageConfig.BackendEnum.postgresql == backend) {
invDao.executeRawNoArgs("alter table invite_3pid alter column \"roomId\" type text");
invDao.executeRawNoArgs("alter table invite_3pid alter column id type text");
invDao.executeRawNoArgs("alter table invite_3pid alter column token type text");
invDao.executeRawNoArgs("alter table invite_3pid alter column sender type text");
invDao.executeRawNoArgs("alter table invite_3pid alter column medium type text");
invDao.executeRawNoArgs("alter table invite_3pid alter column address type text");
invDao.executeRawNoArgs("alter table invite_3pid alter column properties type text");
}
}
private <V, K> Dao<V, K> createDaoAndTable(ConnectionSource connPool, Class<V> c) throws SQLException {
return createDaoAndTable(connPool, c, false);
}
/**
* Workaround for https://github.com/j256/ormlite-core/issues/20.
*/
private <V, K> Dao<V, K> createDaoAndTable(ConnectionSource connPool, Class<V> c, boolean workaround) throws SQLException {
LOGGER.info("Create the dao: {}", c.getSimpleName());
Dao<V, K> dao = DaoManager.createDao(connPool, c);
- TableUtils.createTableIfNotExists(connPool, c);
+ if (workaround && StorageConfig.BackendEnum.postgresql.equals(backend)) {
+ LOGGER.info("Workaround for postgresql on dao: {}", c.getSimpleName());
+ try {
+ dao.countOf();
+ LOGGER.info("Table exists, do nothing");
+ } catch (SQLException e) {
+ LOGGER.info("Table doesn't exist, create");
+ TableUtils.createTableIfNotExists(connPool, c);
+ }
+ } else {
+ TableUtils.createTableIfNotExists(connPool, c);
+ }
return dao;
}


@@ -6,13 +6,16 @@ import com.j256.ormlite.table.DatabaseTable;
@DatabaseTable(tableName = "hashes")
public class HashDao {
- @DatabaseField(canBeNull = false, id = true)
+ @DatabaseField(generatedId = true)
+ private Long id;
+ @DatabaseField(canBeNull = false, uniqueCombo = true)
private String mxid;
- @DatabaseField(canBeNull = false)
+ @DatabaseField(canBeNull = false, uniqueCombo = true)
private String medium;
- @DatabaseField(canBeNull = false)
+ @DatabaseField(canBeNull = false, uniqueCombo = true)
private String address;
@DatabaseField(canBeNull = false)
@@ -28,6 +31,14 @@ public class HashDao {
this.hash = hash;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getMxid() {
return mxid;
}


@@ -20,6 +20,8 @@
package io.kamax.mxisd.test.storage;
import io.kamax.mxisd.config.SQLiteStorageConfig;
import io.kamax.mxisd.config.StorageConfig;
import io.kamax.mxisd.storage.ormlite.OrmLiteSqlStorage;
import org.junit.Test;
@@ -29,14 +31,22 @@ public class OrmLiteSqlStorageTest {
@Test
public void insertAsTxnDuplicate() {
- OrmLiteSqlStorage store = new OrmLiteSqlStorage("sqlite", ":memory:");
+ StorageConfig.Provider provider = new StorageConfig.Provider();
+ SQLiteStorageConfig config = new SQLiteStorageConfig();
+ config.setDatabase(":memory:");
+ provider.setSqlite(config);
+ OrmLiteSqlStorage store = new OrmLiteSqlStorage(StorageConfig.BackendEnum.sqlite, provider);
store.insertTransactionResult("mxisd", "1", Instant.now(), "{}");
store.insertTransactionResult("mxisd", "2", Instant.now(), "{}");
}
@Test(expected = RuntimeException.class)
public void insertAsTxnSame() {
- OrmLiteSqlStorage store = new OrmLiteSqlStorage("sqlite", ":memory:");
+ StorageConfig.Provider provider = new StorageConfig.Provider();
+ SQLiteStorageConfig config = new SQLiteStorageConfig();
+ config.setDatabase(":memory:");
+ provider.setSqlite(config);
+ OrmLiteSqlStorage store = new OrmLiteSqlStorage(StorageConfig.BackendEnum.sqlite, provider);
store.insertTransactionResult("mxisd", "1", Instant.now(), "{}");
store.insertTransactionResult("mxisd", "1", Instant.now(), "{}");
}