Features:
- Add /api/ask endpoint for plain-language audit log queries
- Regex-based time/entity extraction (no LLM required for parsing)
- LLM-powered narrative summarisation with OpenAI-compatible APIs
- Graceful fallback to structured bullet lists when LLM is unavailable
- Frontend ask panel with markdown rendering and cited events

Production:
- Harden Dockerfile: non-root user, gunicorn+uvicorn workers
- Add docker-compose.prod.yml with internal networks and health checks
- Add nginx reverse proxy with security headers
- MongoDB no longer exposed externally in production

Tests:
- 29 new tests for ask parsing, query building, and endpoint behaviour
- Fix conftest monkeypatch for routes.ask events collection

Bump version to 1.1.0
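The "regex-based time/entity extraction" feature above can be sketched as follows. This is a minimal illustration of the idea, not the actual routes.ask implementation: the pattern, the `extract_time_window` name, and the returned `(start, end)` shape are all assumptions.

```python
import re
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: map relative-time phrases like "last 7 days"
# to a UTC time window, with no LLM involved in the parsing step.
_RELATIVE = re.compile(r"\blast\s+(\d+)\s+(hour|day|week)s?\b", re.IGNORECASE)
_UNITS = {"hour": "hours", "day": "days", "week": "weeks"}

def extract_time_window(question: str) -> "tuple[datetime, datetime] | None":
    """Return a (start, end) UTC window for a relative-time phrase, or None."""
    match = _RELATIVE.search(question)
    if match is None:
        return None  # no time phrase found; caller falls back to a default window
    amount, unit = int(match.group(1)), match.group(2).lower()
    end = datetime.now(timezone.utc)
    start = end - timedelta(**{_UNITS[unit]: amount})
    return start, end
```

For example, `extract_time_window("who deleted users in the last 7 days?")` yields a window exactly seven days wide, while a question with no time phrase returns `None`.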
TENANT_ID=your-tenant-id
CLIENT_ID=your-client-id
CLIENT_SECRET=your-client-secret
ENABLE_PERIODIC_FETCH=false
FETCH_INTERVAL_MINUTES=60
AUTH_ENABLED=false
AUTH_TENANT_ID=your-tenant-id
AUTH_CLIENT_ID=your-api-client-id
# API scope the SPA should request at login.
# When set, the frontend acquires an access token for this scope (aud = AUTH_CLIENT_ID).
# When empty, the frontend falls back to the idToken, which is also valid for the backend.
# Example: api://cc31fd45-1eca-431f-a2c6-ba81cd4c5d50/.default
AUTH_SCOPE=
# Comma-separated lists (optional):
AUTH_ALLOWED_ROLES=
AUTH_ALLOWED_GROUPS=
MONGO_ROOT_USERNAME=root
MONGO_ROOT_PASSWORD=example
MONGO_PORT=27017

# MongoDB connection string (takes precedence over root credentials in Docker Compose)
MONGO_URI=mongodb://root:example@localhost:27017

# Optional: number of days to retain events in MongoDB (0 = disabled)
RETENTION_DAYS=0

# Optional: comma-separated CORS origins (e.g., http://localhost:3000,https://app.example.com)
CORS_ORIGINS=*

# Optional: SIEM export webhook (e.g., Splunk HEC, Sentinel, or generic syslog webhook)
SIEM_ENABLED=false
SIEM_WEBHOOK_URL=

# Optional: enable rule-based alerting during ingestion
ALERTS_ENABLED=false

# Optional: LLM configuration for natural language querying (/api/ask)
# Supports any OpenAI-compatible API (OpenAI, Azure OpenAI, Ollama, etc.)
LLM_API_KEY=
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL=gpt-4o-mini
LLM_MAX_EVENTS=50
LLM_TIMEOUT_SECONDS=30
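One way the `LLM_*` variables above could be consumed at startup is sketched below, assuming the commit's "graceful fallback" is keyed off an empty `LLM_API_KEY`. The `LLMConfig` dataclass and `load_llm_config` helper are illustrative names, not the application's actual settings module.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class LLMConfig:
    """Illustrative container for the LLM_* settings in the .env file."""
    api_key: str
    base_url: str
    model: str
    max_events: int
    timeout_seconds: float

    @property
    def enabled(self) -> bool:
        # Assumption: an empty LLM_API_KEY means the /api/ask endpoint
        # skips narrative summarisation and returns structured bullets.
        return bool(self.api_key)

def load_llm_config(env=os.environ) -> LLMConfig:
    """Read the LLM_* variables, applying the defaults shown in the .env example."""
    return LLMConfig(
        api_key=env.get("LLM_API_KEY", ""),
        base_url=env.get("LLM_BASE_URL", "https://api.openai.com/v1"),
        model=env.get("LLM_MODEL", "gpt-4o-mini"),
        max_events=int(env.get("LLM_MAX_EVENTS", "50")),
        timeout_seconds=float(env.get("LLM_TIMEOUT_SECONDS", "30")),
    )
```

Passing a plain dict instead of `os.environ` keeps the loader easy to unit-test, which matches the commit's emphasis on testable ask-endpoint behaviour.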