docs: update README and ROADMAP for v1.3.0
All checks were successful
CI / lint-and-test (push) Successful in 27s
README.md (+23)
@@ -9,6 +9,8 @@ FastAPI microservice that ingests Microsoft Entra (Azure AD) and other admin aud
- Office 365 Management Activity API client for Exchange/SharePoint/Teams admin audit logs.
- Frontend served from the backend for filtering/searching events and viewing raw entries.
- Optional OIDC bearer auth (Entra) to protect the API/UI and gate access by roles/groups.
- Natural language query (`/api/ask`) powered by an LLM (OpenAI, Azure OpenAI, or any compatible API).
- MCP server for Claude Desktop / Cursor integration.

## Prerequisites (macOS)

- Python 3.11
@@ -38,6 +40,15 @@ cp .env.example .env
# Optional: CORS origins if the frontend is served separately
# CORS_ORIGINS=http://localhost:3000,https://app.example.com

# Optional: enable AI/natural-language features (/api/ask, MCP server)
# AI_FEATURES_ENABLED=true

# Optional: LLM configuration for natural language querying
# LLM_API_KEY=...
# LLM_BASE_URL=https://api.openai.com/v1
# LLM_MODEL=gpt-4o-mini
# LLM_TIMEOUT_SECONDS=30
```

## Run with Docker Compose (recommended)
@@ -66,6 +77,7 @@ uvicorn main:app --reload --host 0.0.0.0 --port 8000
## API
- `GET /health` — health check with MongoDB connectivity status.
- `GET /metrics` — Prometheus metrics for request latency, fetch volume, and errors.
- `GET /api/version` — running version (baked into the Docker image at build time).
- `GET /api/fetch-audit-logs` — pulls the last 7 days by default (override with `?hours=N`, capped to 30 days) of:
  - Entra directory audit logs (`/auditLogs/directoryAudits`)
  - Exchange/SharePoint/Teams admin audits (via Office 365 Management Activity API)
@@ -82,11 +94,22 @@ uvicorn main:app --reload --host 0.0.0.0 --port 8000
- `GET /api/source-health` — last fetch status for each ingestion source (`directory`, `unified`, `intune`).
- `PATCH /api/events/{id}/tags` — update tags on an event (e.g., `investigating`, `false_positive`).
- `POST /api/events/{id}/comments` — add a comment to an event.
- `POST /api/ask` — natural language query. Returns a narrative answer plus referenced events. Supports time ranges and entity names, respects active UI filters, and is only available when `AI_FEATURES_ENABLED=true`.
- `GET /api/config/features` — feature flags (`ai_features_enabled`).
- `GET /api/rules` — list alert rules.
- `POST /api/rules` — create an alert rule.
- `PUT /api/rules/{id}` — update an alert rule.
- `DELETE /api/rules/{id}` — delete an alert rule.
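The endpoints above can be exercised with `curl`. This is a usage sketch only: it assumes a local instance on port 8000, and the `question` field name for `/api/ask` is an assumption — check the API schema for the actual request body:

```shell
# Health check and a 24-hour ingest run
curl -s http://localhost:8000/health
curl -s "http://localhost:8000/api/fetch-audit-logs?hours=24"

# Natural language query (requires AI_FEATURES_ENABLED=true);
# the "question" field name below is assumed, not confirmed
curl -s -X POST http://localhost:8000/api/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "Who modified conditional access policies this week?"}'
```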
### MCP Server
A standalone MCP server (`backend/mcp_server.py`) is included for Claude Desktop, Cursor, and other MCP clients:
- `search_events` — filter by entity, service, operation, result, time range.
- `get_event` — retrieve raw event JSON by ID.
- `get_summary` — aggregated summary (service, operation, result, actor counts) for the last N days.
- `ask` — natural language query returning recent events.

Configure your MCP client to run `python /path/to/aoc/backend/mcp_server.py` with `MONGO_URI` in the environment.
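For example, a Claude Desktop entry (in `claude_desktop_config.json`) might look like the fragment below; the server name and the `MONGO_URI` value are placeholders to adapt to your setup:

```json
{
  "mcpServers": {
    "audit-logs": {
      "command": "python",
      "args": ["/path/to/aoc/backend/mcp_server.py"],
      "env": {
        "MONGO_URI": "mongodb://localhost:27017"
      }
    }
  }
}
```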
Stored document shape (collection `micro_soc.events`):
```json
{