feat: initial KosmoConnect platform v0.1

Includes:
- Backend services: ingestion (:8001), weather API (:8002),
  gateway (:8003), billing (:8004) with BTCPay integration
- Shared asyncpg pool, TimescaleDB hypertable, Redis, Mosquitto MQTT
- React frontend: Dashboard (MapLibre) and Messaging (chat UI)
- Bridge daemon for Pi + Meshtastic (Serial/TCP T-Deck support)
- Production Docker Compose, Nginx reverse proxy, ops scripts
- DEPLOY.md with step-by-step deployment guide
2026-04-12 17:30:15 +02:00
commit 0a4fb7b55e
95 changed files with 9903 additions and 0 deletions

.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,63 @@
name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  # Placeholder CI jobs to be expanded as components are implemented
  lint-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint Markdown
        uses: DavidAnson/markdownlint-cli2-action@v14
        with:
          globs: '**/*.md'
        continue-on-error: true
  build-firmware:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install PlatformIO
        run: pip install platformio
      - name: Build enviro-node firmware
        run: |
          echo "Firmware build not yet implemented"
          # cd firmware/enviro-node && pio run
        continue-on-error: true
  test-backend:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          echo "Backend tests not yet implemented"
          # pip install -r backend/api/requirements-dev.txt
        continue-on-error: true
  test-web:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: |
          echo "Web tests not yet implemented"
          # cd web/dashboard && npm ci && npm run lint
        continue-on-error: true

.gitignore vendored Normal file

@@ -0,0 +1,55 @@
# General
.DS_Store
*.log
*.tmp
*.swp
*.swo
.vscode/
.idea/
# Firmware
firmware/**/.pio/
firmware/**/build/
firmware/**/.vscode/
*.elf
*.bin
*.hex
# Backend
backend/**/__pycache__/
backend/**/.venv/
backend/**/venv/
backend/**/.env
backend/**/*.pyc
backend/**/.pytest_cache/
backend/**/.mypy_cache/
# Web
web/**/node_modules/
web/**/dist/
web/**/.env.local
web/**/.env.*.local
# Ops
ops/terraform/.terraform/
ops/terraform/*.tfstate
ops/terraform/*.tfstate.*
ops/terraform/*.tfvars
ops/ansible/*.retry
# Hardware
hardware/**/*.lck
hardware/**/gerbers/
hardware/**/production/
hardware/**/*.step
hardware/**/*.stp
# Kits
kits/**/*.pdf
!kits/**/manual/*.pdf
# Secrets
secrets/
*.pem
*.key
.env

DEPLOY.md Normal file

@@ -0,0 +1,314 @@
# KosmoConnect Deployment Guide
This document walks you through deploying the entire KosmoConnect stack: cloud backend, web frontends, Raspberry Pi bridge daemon, and T-Deck integration.
---
## 1. Prerequisites
### Cloud Server (VPS or bare metal)
- **OS**: Ubuntu 22.04 LTS or Debian 12 recommended
- **RAM**: 2GB minimum, 4GB recommended
- **Ports**: 22 (SSH), 80 (HTTP), 443 (HTTPS), 1883 (MQTT — can be restricted)
- **Domain**: Optional but strongly recommended (e.g., `kosmo.example.com`)
### Raspberry Pi (Bridge Node)
- **Model**: Pi 3B+ or Pi 4
- **OS**: Raspberry Pi OS Lite (64-bit)
- **Peripherals**: Reliable power supply, internet (WiFi or Ethernet)
- **Meshtastic device**: T-Deck (WiFi mode) or T-Beam (USB)
### Local Development Machine
- Docker & Docker Compose
- Node.js 20+ and npm
- Python 3.13+ (for testing)
---
## 2. Cloud Backend Deployment
### 2.1 Prepare the Server
```bash
# On your server
sudo apt update && sudo apt upgrade -y
sudo apt install -y git docker.io docker-compose nginx certbot python3-certbot-nginx
sudo usermod -aG docker $USER
# Log out and back in for group changes to take effect
```
### 2.2 Clone the Repository
```bash
git clone https://your-repo/kosmo-connect.git
cd kosmo-connect
```
### 2.3 Configure Environment
```bash
cd backend
cp .env.prod.example .env
nano .env
```
Fill in:
- `POSTGRES_PASSWORD` — strong random password
- `BTCPAY_API_KEY`, `BTCPAY_STORE_ID`, `WEBHOOK_SECRET` — from your BTCPay Server
### 2.4 Build Web Frontends
```bash
cd ../web
./build.sh
```
This produces:
- `web/dashboard/dist/`
- `web/messaging/dist/`
### 2.5 Start Backend Services
```bash
cd ../backend
docker-compose -f docker-compose.prod.yml up -d --build
```
Services bind to localhost-only ports on the server:
- API: `127.0.0.1:8002`
- Ingestion: `127.0.0.1:8001`
- Gateway: `127.0.0.1:8003`
- Billing: `127.0.0.1:8004`
- MQTT: `127.0.0.1:1883` (only locally exposed by default)
Nginx serves the static frontends on port 80 and proxies `/api` to the correct service.
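As a rough orientation, the proxy layout looks something like the sketch below. This is illustrative only: the shipped `backend/nginx.conf` is authoritative, and the document roots and exact `location` blocks here are assumptions based on the ports listed above.

```nginx
# Illustrative sketch — see backend/nginx.conf for the real configuration
server {
    listen 80;
    server_name kosmo.example.com;

    # Static frontends produced by web/build.sh (paths assumed)
    location / {
        root /usr/share/nginx/html/dashboard;
        try_files $uri $uri/ /index.html;
    }
    location /messaging {
        alias /usr/share/nginx/html/messaging;
        try_files $uri $uri/ /messaging/index.html;
    }

    # API routes fan out to the backend services by port
    location /api/v1/weather/  { proxy_pass http://127.0.0.1:8002; }
    location /api/v1/nodes     { proxy_pass http://127.0.0.1:8002; }
    location /api/v1/messages/ { proxy_pass http://127.0.0.1:8003; }
    location /api/v1/billing/  { proxy_pass http://127.0.0.1:8004; }
}
```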
### 2.6 Seed the Database (First Time Only)
```bash
# Seed test users for dev / early testing
docker-compose -f docker-compose.prod.yml exec -T timescaledb \
psql -U kosmo -d kosmoconnect < migrations/002_seed_test_users.sql
```
### 2.7 Configure SSL (Recommended)
If you have a domain pointing to the server:
```bash
sudo certbot --nginx -d kosmo.example.com
```
Update `backend/nginx.conf` to use HTTPS and redirect HTTP to HTTPS. Then reload nginx:
```bash
docker-compose -f docker-compose.prod.yml restart nginx
```
### 2.8 Open MQTT to the Bridge (If Needed)
By default, Mosquitto only listens on `127.0.0.1:1883`. If your Pi bridge needs to connect over the internet, you have two options:
**Option A: VPN / WireGuard** (recommended for security)
- Run a WireGuard server on the cloud host
- Connect the Pi as a peer
- The Pi can then reach Mosquitto at `mosquitto:1883` internally
**Option B: Public MQTT with Authentication**
- Change the Mosquitto port binding in `docker-compose.prod.yml` to `0.0.0.0:1883`
- Enable TLS on Mosquitto and require username/password
- Update `mosquitto.conf` with authentication
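For Option B, the relevant `mosquitto.conf` directives look roughly like this (a sketch only — file paths and the TLS listener are assumptions to adapt to your deployment):

```conf
# Public listener with required authentication
listener 1883 0.0.0.0
allow_anonymous false
# Create the password file with: mosquitto_passwd -c /mosquitto/config/passwd bridge
password_file /mosquitto/config/passwd

# Optional TLS listener (certificates e.g. from certbot)
listener 8883
cafile /mosquitto/certs/chain.pem
certfile /mosquitto/certs/cert.pem
keyfile /mosquitto/certs/privkey.pem
```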
---
## 3. Raspberry Pi Bridge Deployment
### 3.1 Prepare the Pi
```bash
# On the Pi
sudo apt update
sudo apt install -y python3-venv python3-pip git rsync
```
### 3.2 Deploy from Your Dev Machine
```bash
cd firmware/infrastructure-node/bridge-daemon
./deploy-pi.sh 192.168.1.50 pi
```
This copies the daemon to `/opt/kosmo-bridge` and installs the systemd service.
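The installed unit is along these lines — a sketch under assumptions (the `ExecStart` path and entry-point name are illustrative; `deploy-pi.sh` installs the real file):

```ini
# /etc/systemd/system/kosmo-bridge.service (illustrative sketch)
[Unit]
Description=KosmoConnect mesh bridge daemon
After=network-online.target
Wants=network-online.target

[Service]
WorkingDirectory=/opt/kosmo-bridge
ExecStart=/opt/kosmo-bridge/.venv/bin/python bridge.py
Restart=on-failure
RestartSec=5
# MQTT_* / MESHTASTIC_* environment is added via `systemctl edit` (see 3.3)

[Install]
WantedBy=multi-user.target
```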
### 3.3 Configure the Bridge
Edit the service to point to your cloud MQTT broker:
```bash
ssh pi@192.168.1.50
sudo systemctl edit --full kosmo-bridge
```
Example for **T-Deck over WiFi**:
```ini
[Service]
Environment="PYTHONUNBUFFERED=1"
Environment="MQTT_HOST=kosmo.example.com"
Environment="MQTT_PORT=1883"
Environment="MESHTASTIC_HOST=192.168.1.45"
Environment="MESHTASTIC_TCP_PORT=4403"
Environment="GATEWAY_NODE_ID=!yourgateway01"
```
Or for **USB T-Beam**:
```ini
Environment="MQTT_HOST=kosmo.example.com"
Environment="MQTT_PORT=1883"
Environment="MESHTASTIC_DEVICE=/dev/ttyUSB0"
Environment="GATEWAY_NODE_ID=!yourgateway01"
```
### 3.4 Start and Monitor
```bash
sudo systemctl daemon-reload
sudo systemctl restart kosmo-bridge
sudo journalctl -u kosmo-bridge -f
```
---
## 4. T-Deck WiFi Setup
### 4.1 Enable WiFi on the T-Deck
Using the Meshtastic app on your phone or the Python CLI:
```bash
meshtastic --host <t-deck-ip> --set wifi_ssid "YourNetwork" --set wifi_psk "YourPassword"
```
Or via the device menu if the T-Deck firmware supports on-screen WiFi config.
### 4.2 Find the T-Deck IP
Check your router's DHCP table, or scan the network:
```bash
nmap -p 4403 192.168.1.0/24
```
You should see an open port `4403` on the T-Deck's IP.
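If `nmap` isn't installed, a few lines of Python can probe the Meshtastic TCP port directly (the IP below is a placeholder — substitute your T-Deck's address):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Replace with the T-Deck IP from your router's DHCP table
    print("T-Deck reachable:", port_open("192.168.1.45", 4403))
```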
### 4.3 Test Mesh Connectivity
Send a text message from another Meshtastic node. You should see it appear in the cloud logs:
```bash
# On the cloud server
docker-compose -f docker-compose.prod.yml logs -f gateway
```
---
## 5. BTCPay Server Webhook Configuration
1. Log in to `https://pay.cqre.net`
2. Go to your store → **Webhooks** → **Create Webhook**
3. Payload URL:
```
https://kosmo.example.com/api/v1/billing/webhooks/btcpay
```
4. Select events:
- `Invoice created`
- `Invoice settled`
- `Invoice expired`
- `Invoice invalid`
5. Save and copy the **Webhook Secret**
6. Paste it into your cloud server's `.env` as `WEBHOOK_SECRET`
7. Restart billing:
```bash
cd backend
docker-compose -f docker-compose.prod.yml restart billing
```
---
## 6. Post-Deployment Checklist
| Check | Command / Test |
|-------|----------------|
| Dashboard loads | Open `https://kosmo.example.com/` in browser |
| Messaging client loads | Open `https://kosmo.example.com/messaging` |
| API healthy | `curl https://kosmo.example.com/api/v1/weather/latest` |
| Gateway healthy | `curl https://kosmo.example.com/api/v1/messages/conversations -H "X-User-ID: ..."` |
| Billing healthy | `curl https://kosmo.example.com/api/v1/billing/invoices -H "X-User-ID: ..."` |
| MQTT reachable from Pi | `nc -vz kosmo.example.com 1883` (or via VPN) |
| Bridge daemon running | `ssh pi@... "sudo systemctl status kosmo-bridge"` |
| Mesh messages flow | Send text from T-Deck, check gateway logs |
| Web-to-mesh works | Send message from browser, receive on T-Deck |
| BTCPay webhook works | Create invoice, pay it, verify subscription activates |
---
## 7. Updating After Deployment
### Update Backend
```bash
cd kosmo-connect/backend
git pull
docker-compose -f docker-compose.prod.yml up -d --build
```
### Update Web Frontends
```bash
cd kosmo-connect/web
./build.sh
cd ../backend
docker-compose -f docker-compose.prod.yml restart nginx
```
### Update Bridge Daemon on Pi
```bash
cd kosmo-connect/firmware/infrastructure-node/bridge-daemon
./deploy-pi.sh 192.168.1.50 pi
ssh pi@192.168.1.50 "sudo systemctl restart kosmo-bridge"
```
---
## 8. Troubleshooting
### Dashboard shows no nodes
- Verify ingestion service is running
- Check that `simulate-bridge.py` or a real bridge is publishing to MQTT
- Inspect ingestion logs: `docker-compose -f docker-compose.prod.yml logs ingestion`
### T-Deck not reachable over TCP
- Ensure T-Deck and Pi are on the same WiFi network
- Verify port 4403 is open: `nmap -p 4403 <t-deck-ip>`
- Try restarting the T-Deck
### Bridge daemon cannot connect to MQTT
- If using VPN, verify WireGuard tunnel is up (`wg show`)
- If exposing MQTT publicly, confirm firewall rules allow port 1883
- Check Mosquitto logs: `docker-compose -f docker-compose.prod.yml logs mosquitto`
### Messages send but T-Deck never receives them
- Confirm the target `node_id` matches exactly (case-sensitive, includes `!`)
- Check gateway logs for outbound publishes
- Check bridge daemon logs for MQTT subscription hits
### BTCPay webhook not triggering subscriptions
- Verify `WEBHOOK_SECRET` matches BTCPay exactly
- Test webhook manually with a simulated payload
- Check billing logs for signature verification errors
---
*"Through openness, we preserve. Through preservation, we evolve. Through evolution, we return."*

README.md Normal file

@@ -0,0 +1,62 @@
# KosmoConnect
**A Church of Kosmo Technology Project**
KosmoConnect is a hybrid environmental monitoring and emergency communication platform built by the Church of Kosmo on the [Meshtastic](https://meshtastic.org/) open mesh protocol. It consists of a network of solar-powered, stationary environmental stations that collect long-term weather data while simultaneously acting as relay nodes for a resilient, off-grid communication network serving the Church of Kosmo community and beyond.
## About Church of Kosmo
The Church of Kosmo is committed to building resilient, community-owned infrastructure. KosmoConnect is our flagship technology project, combining environmental stewardship with decentralized emergency communications.
## Objectives
### 1. Enviro-Node Network
Build and deploy a network of solar-powered environmental monitoring stations ("enviro-nodes") that:
- Collect weather and environmental data (temperature, humidity, pressure, wind, air quality, etc.)
- Communicate over Meshtastic mesh radio
- Relay messages for other mesh clients
- Offload stored data to central infrastructure via multi-hop routing
- Operate autonomously year-round on solar power
- Are available as buildable/salable kits
### 2. Web-to-Mesh Gateway
Create a subscription-based web service that allows users to send messages to any node on the Kosmo mesh from devices without Meshtastic radios (e.g., web browsers, smartphones). Access is limited to paying subscribers and can be restricted per-network or per-node.
## Repository Map
This is a **monorepo** tracking all components of the project. When individual components mature, they may be extracted into dedicated repositories. Boundaries are clearly marked.
| Directory | Purpose |
|-----------|---------|
| `docs/` | Architecture, requirements, API specs, deployment guides |
| `hardware/` | PCB designs, schematics, enclosures, BOMs, solar power calculations |
| `firmware/` | Enviro-node and infrastructure node firmware |
| `backend/` | Central server: data ingestion, API, message gateway, BTCPay billing |
| `web/` | Frontend applications: weather dashboard, messaging client, admin panel |
| `ops/` | Infrastructure-as-code, deployment automation, monitoring |
| `tests/` | Integration tests and hardware-in-the-loop validation |
| `kits/` | Assembly guides, packaging designs, certification docs |
| `legal/` | Terms of service, privacy policy, open-source licensing |
## Quick Links
- [System Architecture](./docs/architecture/system-overview.md)
- [Data Flow](./docs/architecture/data-flow.md)
- [Messaging Gateway Design](./docs/architecture/messaging-gateway.md)
- [Product Requirements](./docs/requirements/prd.md)
- [Hardware Overview](./hardware/README.md)
- [Dashboard Dev Guide](./web/dashboard/README.md)
- [Messaging Client Dev Guide](./web/messaging/README.md)
- [Billing Service](./backend/billing/README.md)
- [Project Roadmap](./docs/roadmap.md)
## License
KosmoConnect is a technology project of the Church of Kosmo. The project operates under the spirit of **[The Kosmic License](./legal/LICENSE-KOSMIC.md)** (KΛ 1.0).
For legal enforceability, specific components use standard licenses:
- **Software** (`firmware/`, `backend/`, `web/`, `ops/`, `tests/`): AGPL-3.0
- **Hardware** (`hardware/`, `kits/`): CERN-OHL-S-2.0
- **Documentation** (`docs/`): CC-BY-SA-4.0 or The Kosmic License
See [legal/README.md](./legal/README.md) for the full licensing guide. We also maintain a proposed improved draft at [KΛ 1.1-Draft](./legal/LICENSE-KOSMIC-DRAFT-1.1.md).

backend/.env.example Normal file

@@ -0,0 +1,5 @@
# Copy this file to .env and adjust as needed
DATABASE_URL=postgresql://kosmo:kosmo_dev_pass@localhost:5432/kosmoconnect
MQTT_HOST=localhost
MQTT_PORT=1883
MQTT_TOPIC=kosmo/ingest/enviro

backend/.env.prod.example Normal file

@@ -0,0 +1,18 @@
# Copy this file to .env and fill in real values before deploying
# Database
POSTGRES_USER=kosmo
POSTGRES_PASSWORD=CHANGE_ME_TO_STRONG_PASSWORD
POSTGRES_DB=kosmoconnect
DATABASE_URL=postgresql://kosmo:CHANGE_ME_TO_STRONG_PASSWORD@timescaledb:5432/kosmoconnect
# MQTT
MQTT_HOST=mosquitto
MQTT_PORT=1883
MQTT_TOPIC=kosmo/ingest/enviro
# BTCPay Server (Church of Kosmo)
BTCPAY_URL=https://pay.cqre.net
BTCPAY_API_KEY=your_btcpay_api_key
BTCPAY_STORE_ID=your_btcpay_store_id
WEBHOOK_SECRET=your_webhook_secret

backend/Dockerfile Normal file

@@ -0,0 +1,21 @@
# KosmoConnect Backend Services
# Build context: backend/
FROM python:3.13-slim
WORKDIR /app
# Install build dependencies for packages that may need compilation
RUN apt-get update && apt-get install -y --no-install-recommends gcc && rm -rf /var/lib/apt/lists/*
# Copy requirements and install
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
ENV PYTHONPATH=/app:/app/shared
ENV PYTHONUNBUFFERED=1
# Default to API service; override CMD per service
CMD ["uvicorn", "api.src.main:app", "--host", "0.0.0.0", "--port", "8000"]

backend/README.md Normal file

@@ -0,0 +1,93 @@
# KosmoConnect Backend
This directory contains the cloud backend services for KosmoConnect.
## Quick Start
### 1. Start Infrastructure Services
You need Docker running on your machine.
```bash
cd backend
docker-compose up -d
```
This starts:
- **TimescaleDB** on port `5432`
- **Redis** on port `6379`
- **RabbitMQ** on port `5672` (management UI on `15672`)
- **Mosquitto MQTT** on port `1883`
The first time TimescaleDB starts, it will automatically run the migration in `migrations/001_initial_schema.sql`.
### 2. Install Python Dependencies
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
### 3. Run the Services
In terminal 1:
```bash
./run-dev.sh ingestion
```
In terminal 2:
```bash
./run-dev.sh api
```
- API docs: http://localhost:8002/docs
- Ingestion health: http://localhost:8001/health
### 4. Simulate Data (No Hardware Needed)
In terminal 3:
```bash
cd ..
python3 -m venv .venv
source .venv/bin/activate
pip install -r backend/requirements.txt
python3 scripts/simulate-bridge.py --interval 5
```
This publishes fake environmental readings every 5 seconds. The ingestion service will pick them up and write them to TimescaleDB. You can then query the API:
```bash
curl "http://localhost:8002/api/v1/weather/latest"
```
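The shape of a simulated reading can be sketched as below. This is a hedged reconstruction: the field names mirror the `enviro_readings` columns used by the API, but `scripts/simulate-bridge.py` itself is the authoritative payload source.

```python
import json
import random
import time

def fake_reading(node_id: str = "!simnode01") -> dict:
    """Build one simulated enviro reading; keys mirror enviro_readings columns."""
    return {
        "node_id": node_id,
        "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "temperature_c": round(random.uniform(-5, 30), 1),
        "humidity_percent": round(random.uniform(20, 95), 1),
        "pressure_pa": round(random.uniform(98000, 103000)),
        "battery_voltage": round(random.uniform(3.4, 4.2), 2),
        "solar_voltage": round(random.uniform(0.0, 6.0), 2),
    }

# A real publisher would send this JSON to MQTT_TOPIC (kosmo/ingest/enviro)
# using any MQTT client, e.g. paho-mqtt.
payload = json.dumps(fake_reading())
print(payload)
```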
## Service Architecture
| Service | Port | Responsibility |
|---------|------|----------------|
| API | 8002 | REST API for dashboard and web clients |
| Ingestion | 8001 | Subscribes to MQTT, writes sensor data to TimescaleDB |
| Gateway | 8003 | Web-to-mesh message queue, delivery tracking, and subscription enforcement |
| Billing | 8004 | BTCPay Server integration for subscriptions and invoices |
## Database Schema
- **nodes**: Registry of all enviro-nodes and infrastructure nodes
- **enviro_readings**: Time-series hypertable for sensor data
- **mesh_messages**: Delivery tracking for gateway messages
- **users / subscriptions / allowed_nodes**: Subscriber management
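The heart of the schema is the hypertable. A minimal sketch, assuming the column names visible in the API queries (the shipped `migrations/001_initial_schema.sql` is authoritative and carries more columns):

```sql
-- Illustrative sketch, not the shipped migration
CREATE TABLE enviro_readings (
    time             TIMESTAMPTZ NOT NULL,
    node_id          TEXT        NOT NULL,
    temperature_c    DOUBLE PRECISION,
    humidity_percent DOUBLE PRECISION,
    pressure_pa      DOUBLE PRECISION,
    battery_voltage  DOUBLE PRECISION,
    solar_voltage    DOUBLE PRECISION
);
-- Turn it into a TimescaleDB hypertable partitioned on time
SELECT create_hypertable('enviro_readings', 'time');
```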
## Environment Variables
Copy `.env.example` to `.env` and customize:
```bash
cp .env.example .env
```
| Variable | Default | Description |
|----------|---------|-------------|
| `DATABASE_URL` | `postgresql://kosmo:kosmo_dev_pass@localhost:5432/kosmoconnect` | TimescaleDB connection |
| `MQTT_HOST` | `localhost` | MQTT broker host |
| `MQTT_PORT` | `1883` | MQTT broker port |
| `MQTT_TOPIC` | `kosmo/ingest/enviro` | Topic for enviro data |

backend/api/src/main.py Normal file

@@ -0,0 +1,144 @@
import os
import sys
from contextlib import asynccontextmanager
from datetime import datetime, timedelta, timezone
from typing import List, Optional

import asyncpg
from fastapi import FastAPI, Query
from fastapi.middleware.cors import CORSMiddleware

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../shared"))
from db import get_pool
from models import EnviroReading, Node

pool = None


@asynccontextmanager
async def lifespan(app: FastAPI):
    global pool
    pool = await get_pool()
    yield
    await pool.close()


app = FastAPI(title="KosmoConnect API", lifespan=lifespan)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


@app.get("/health")
async def health():
    return {"status": "ok", "service": "api"}


@app.get("/api/v1/weather/latest")
async def get_latest_readings(node_id: Optional[str] = Query(None)):
    async with pool.acquire() as conn:
        if node_id:
            rows = await conn.fetch(
                """
                SELECT DISTINCT ON (node_id)
                    time, node_id, temperature_c, humidity_percent, pressure_pa,
                    wind_speed_ms, wind_direction, pm25_ugm3, pm10_ugm3,
                    gas_resistance_kohm, battery_voltage, solar_voltage
                FROM enviro_readings
                WHERE node_id = $1
                ORDER BY node_id, time DESC
                """,
                node_id,
            )
        else:
            rows = await conn.fetch(
                """
                SELECT DISTINCT ON (node_id)
                    time, node_id, temperature_c, humidity_percent, pressure_pa,
                    wind_speed_ms, wind_direction, pm25_ugm3, pm10_ugm3,
                    gas_resistance_kohm, battery_voltage, solar_voltage
                FROM enviro_readings
                ORDER BY node_id, time DESC
                """
            )
    return {"data": [dict(r) for r in rows]}


@app.get("/api/v1/weather/history")
async def get_history(
    node_id: str = Query(...),
    start: datetime = Query(...),
    end: datetime = Query(...),
    interval: Optional[str] = Query("raw", enum=["raw", "1h", "1d"]),
):
    async with pool.acquire() as conn:
        if interval == "raw":
            rows = await conn.fetch(
                """
                SELECT time, node_id, temperature_c, humidity_percent, pressure_pa,
                       wind_speed_ms, wind_direction, pm25_ugm3, pm10_ugm3,
                       gas_resistance_kohm, battery_voltage, solar_voltage
                FROM enviro_readings
                WHERE node_id = $1 AND time >= $2 AND time <= $3
                ORDER BY time DESC
                """,
                node_id,
                start,
                end,
            )
        else:
            # asyncpg encodes Postgres intervals as datetime.timedelta, so pass
            # a timedelta rather than a string like "1 hour"
            bucket = timedelta(hours=1) if interval == "1h" else timedelta(days=1)
            rows = await conn.fetch(
                """
                SELECT
                    time_bucket($4, time) AS time,
                    node_id,
                    avg(temperature_c) AS temperature_c,
                    avg(humidity_percent) AS humidity_percent,
                    avg(pressure_pa) AS pressure_pa,
                    avg(wind_speed_ms) AS wind_speed_ms,
                    avg(wind_direction)::smallint AS wind_direction,
                    avg(pm25_ugm3) AS pm25_ugm3,
                    avg(pm10_ugm3) AS pm10_ugm3,
                    avg(gas_resistance_kohm) AS gas_resistance_kohm,
                    avg(battery_voltage) AS battery_voltage,
                    avg(solar_voltage) AS solar_voltage
                FROM enviro_readings
                WHERE node_id = $1 AND time >= $2 AND time <= $3
                GROUP BY time_bucket($4, time), node_id
                ORDER BY time DESC
                """,
                node_id,
                start,
                end,
                bucket,
            )
    return {"data": [dict(r) for r in rows]}


@app.get("/api/v1/nodes")
async def get_nodes():
    async with pool.acquire() as conn:
        rows = await conn.fetch(
            """
            SELECT
                id::text,
                mesh_node_id,
                name,
                lat,
                lon,
                hardware_revision,
                installed_at,
                last_seen,
                is_active
            FROM nodes
            WHERE is_active = true
            ORDER BY last_seen DESC NULLS LAST
            """
        )
    return {"data": [dict(r) for r in rows]}

backend/billing/README.md Normal file

@@ -0,0 +1,116 @@
# KosmoConnect Billing Service
Integrates with **BTCPay Server** (`pay.cqre.net`) for subscription payments.
## What It Does
- **Invoice Creation**: Generates BTCPay invoices for plan purchases (Wanderer, Guardian, Sanctuary)
- **Webhook Handling**: Listens to BTCPay Server webhooks and updates subscription status on payment
- **Subscription Activation**: On `InvoiceSettled`, extends the user's active subscription in PostgreSQL
- **Invoice History**: Lets users view their past invoices and payment status
## Why BTCPay Server?
The Church of Kosmo operates its own payment infrastructure at `pay.cqre.net`. BTCPay Server is a self-hosted, open-source Bitcoin payment processor. It enables sovereign, censorship-resistant payments without relying on third-party card processors.
## Plan Pricing
| Plan | Monthly Price | Messages | Scope |
|------|---------------|----------|-------|
| **Wanderer** | $5.00 | 50/month | Any node on the mesh |
| **Guardian** | $12.00 | 500/month | Only whitelisted nodes |
| **Sanctuary** | $50.00 | Unlimited | Any node + API/webhooks |
Prices are denominated in `USD` and paid via BTCPay Server (settled in BTC or Lightning, depending on store configuration).
## Running Locally
```bash
cd backend
export BTCPAY_URL=https://pay.cqre.net
export BTCPAY_API_KEY=your_api_key_here
export BTCPAY_STORE_ID=your_store_id_here
export WEBHOOK_SECRET=your_webhook_secret_here
./run-dev.sh billing
```
## BTCPay Server Setup Checklist
1. **Create an API Key** in your BTCPay Server instance with the following permissions:
- `Create invoice`
- `View invoices`
- `Modify store webhooks`
2. **Create a Webhook** in your BTCPay store pointing to:
```
https://your-kosmoconnect-instance/api/v1/billing/webhooks/btcpay
```
Enable events:
- `Invoice created`
- `Invoice received payment`
- `Invoice processing`
- `Invoice expired`
- `Invoice settled`
- `Invoice invalid`
3. **Set the Webhook Secret** in the billing service (`WEBHOOK_SECRET`) to verify webhook signatures.
## API Endpoints
| Method | Path | Description |
|--------|------|-------------|
| POST | `/api/v1/billing/invoices` | Create a new invoice for a plan |
| GET | `/api/v1/billing/invoices` | List user's invoices |
| GET | `/api/v1/billing/invoices/{invoice_id}` | Get invoice details + sync status |
| POST | `/api/v1/billing/webhooks/btcpay` | BTCPay webhook receiver |
## Example: Create an Invoice
```bash
curl -X POST http://localhost:8004/api/v1/billing/invoices \
-H "Content-Type: application/json" \
-H "X-User-ID: 11111111-1111-1111-1111-111111111111" \
-d '{"plan_type": "wanderer", "redirect_url": "https://kosmoconnect.local/thank-you"}'
```
Response:
```json
{
"invoice_id": "...",
"checkout_url": "https://pay.cqre.net/i/...",
"amount": 5.0,
"currency": "USD",
"plan_type": "wanderer"
}
```
## Webhook Payload
The billing service expects standard BTCPay Server webhook payloads. On `InvoiceSettled`, it:
1. Looks up the invoice in `btcpay_invoices`
2. Deactivates the user's previous subscription
3. Inserts a new active subscription with `valid_from = NOW()` and `valid_until = NOW() + 30 days`
4. Resets `messages_used` to `0`
## Testing Webhooks Locally
If you can't expose localhost to BTCPay, you can simulate a webhook:
```bash
curl -X POST http://localhost:8004/api/v1/billing/webhooks/btcpay \
-H "Content-Type: application/json" \
-d '{
"type": "InvoiceSettled",
"invoiceId": "your-test-invoice-id",
"status": "Settled"
}'
```
**Note:** Webhook signature verification is skipped if `WEBHOOK_SECRET` is not set.
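To exercise the signature path as well, compute the signature yourself. This sketch assumes BTCPay's standard scheme (HMAC-SHA256 of the raw body, sent as `BTCPay-Sig: sha256=<hex>`), which is what `verify_webhook` checks; the secret below is a placeholder:

```python
import hashlib
import hmac
import json

secret = "your_webhook_secret_here"  # must match WEBHOOK_SECRET
body = json.dumps({
    "type": "InvoiceSettled",
    "invoiceId": "your-test-invoice-id",
    "status": "Settled",
}).encode()

# "sha256=" + HMAC-SHA256 hex digest of the raw request body
digest = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
sig_header = f"sha256={digest}"
print(sig_header)
```

Then include the header in the simulated request, e.g. `-H "BTCPay-Sig: <sig_header>"` with the exact same body bytes.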
## Troubleshooting
- **"BTCPay not configured"**: Set `BTCPAY_URL`, `BTCPAY_API_KEY`, and `BTCPAY_STORE_ID` environment variables.
- **403 on webhook**: Check that `WEBHOOK_SECRET` matches the secret configured in BTCPay Server.
- **Invoice not found on webhook**: Ensure the invoice was created through the billing service (so the `btcpay_invoice_id` exists in the database).

backend/billing/src/btcpay_client.py Normal file

@@ -0,0 +1,79 @@
import base64
import hashlib
import hmac
import logging
from typing import Optional

import httpx

from . import config

logger = logging.getLogger("billing.btcpay")


class BTCPayClient:
    def __init__(self):
        self.base_url = config.BTCPAY_URL.rstrip("/")
        self.api_key = config.BTCPAY_API_KEY
        self.store_id = config.BTCPAY_STORE_ID
        self.client = httpx.AsyncClient(
            base_url=self.base_url,
            headers={
                "Authorization": f"token {self.api_key}",
                "Content-Type": "application/json",
            },
            timeout=30.0,
        )

    async def create_invoice(
        self,
        amount: float,
        currency: str,
        order_id: str,
        metadata: dict,
        checkout_desc: Optional[str] = None,
    ) -> dict:
        payload = {
            "amount": amount,
            "currency": currency,
            "metadata": {
                "orderId": order_id,
                **metadata,
            },
            "checkout": {
                "redirectURL": metadata.get("redirect_url", ""),
                "redirectAutomatically": True,
            },
        }
        if checkout_desc:
            payload["metadata"]["itemDesc"] = checkout_desc
        url = f"/api/v1/stores/{self.store_id}/invoices"
        resp = await self.client.post(url, json=payload)
        resp.raise_for_status()
        return resp.json()

    async def get_invoice(self, invoice_id: str) -> dict:
        url = f"/api/v1/stores/{self.store_id}/invoices/{invoice_id}"
        resp = await self.client.get(url)
        resp.raise_for_status()
        return resp.json()

    def verify_webhook(self, body: bytes, signature_header: str) -> bool:
        """Verify BTCPay Server webhook signature using HMAC-SHA256."""
        if not config.WEBHOOK_SECRET:
            logger.warning("WEBHOOK_SECRET not set; skipping webhook verification")
            return True
        # BTCPay sends the signature as "sha256=<hex>"
        expected_prefix = "sha256="
        if not signature_header.startswith(expected_prefix):
            return False
        received_sig = signature_header[len(expected_prefix):]
        computed = hmac.new(
            config.WEBHOOK_SECRET.encode(),
            body,
            hashlib.sha256,
        ).hexdigest()
        return hmac.compare_digest(received_sig, computed)

backend/billing/src/config.py Normal file

@@ -0,0 +1,35 @@
import os

BTCPAY_URL = os.getenv("BTCPAY_URL", "https://pay.cqre.net")
BTCPAY_API_KEY = os.getenv("BTCPAY_API_KEY", "")
BTCPAY_STORE_ID = os.getenv("BTCPAY_STORE_ID", "")

# Plan configuration: monthly price in USD (or BTC if you prefer)
PLANS = {
    "wanderer": {
        "name": "Wanderer",
        "price": 5.00,
        "currency": "USD",
        "message_quota": 50,
        "duration_days": 30,
        "scope": "network",
    },
    "guardian": {
        "name": "Guardian",
        "price": 12.00,
        "currency": "USD",
        "message_quota": 500,
        "duration_days": 30,
        "scope": "node",
    },
    "sanctuary": {
        "name": "Sanctuary",
        "price": 50.00,
        "currency": "USD",
        "message_quota": None,
        "duration_days": 30,
        "scope": "network",
    },
}

WEBHOOK_SECRET = os.getenv("WEBHOOK_SECRET", "")

backend/billing/src/main.py Normal file

@@ -0,0 +1,274 @@
#!/usr/bin/env python3
"""
KosmoConnect Billing Service

Integrates with BTCPay Server (pay.cqre.net) for subscription payments.
"""
import json
import logging
import os
import sys
import uuid
from contextlib import asynccontextmanager
from datetime import datetime, timedelta, timezone
from typing import Optional

from fastapi import FastAPI, Header, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../shared"))
from db import get_pool
from billing.src.btcpay_client import BTCPayClient
from billing.src.models import CreateInvoiceRequest, CreateInvoiceResponse
import billing.src.config as config

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("billing")

pool = None
btcpay: Optional[BTCPayClient] = None


# ============================================================
# Subscription Management
# ============================================================
async def activate_subscription(user_id: str, plan_type: str, invoice_id: str):
    plan = config.PLANS.get(plan_type)
    if not plan:
        logger.error("Unknown plan type: %s", plan_type)
        return
    duration = timedelta(days=plan["duration_days"])
    valid_from = datetime.now(timezone.utc)
    valid_until = valid_from + duration
    async with pool.acquire() as conn:
        # Deactivate previous subscriptions for this user
        await conn.execute(
            "UPDATE subscriptions SET is_active = false WHERE user_id = $1",
            user_id,
        )
        await conn.execute(
            """
            INSERT INTO subscriptions (
                id, user_id, plan_type, btcpay_invoice_id, message_quota,
                messages_used, valid_from, valid_until, is_active
            ) VALUES ($1, $2, $3, $4, $5, 0, $6, $7, true)
            """,
            uuid.uuid4(),
            user_id,
            plan_type,
            invoice_id,
            plan["message_quota"],
            valid_from,
            valid_until,
        )
    logger.info("Activated %s subscription for user %s until %s", plan_type, user_id, valid_until)


async def handle_invoice_webhook(invoice_id: str, status: str):
    async with pool.acquire() as conn:
        row = await conn.fetchrow(
            "SELECT * FROM btcpay_invoices WHERE btcpay_invoice_id = $1",
            invoice_id,
        )
    if not row:
        logger.warning("Received webhook for unknown invoice %s", invoice_id)
        return
    db_status = status.capitalize() if status else "Pending"
    settled_at = None
    if status in ("Settled", "Complete"):
        db_status = "Settled"
        settled_at = datetime.now(timezone.utc)
        await activate_subscription(row["user_id"], row["plan_type"], invoice_id)
    elif status == "Expired":
        db_status = "Expired"
    elif status == "Invalid":
        db_status = "Invalid"
    async with pool.acquire() as conn:
        await conn.execute(
            """
            UPDATE btcpay_invoices
            SET status = $1, settled_at = COALESCE($2, settled_at), updated_at = NOW()
            WHERE btcpay_invoice_id = $3
            """,
            db_status,
            settled_at,
            invoice_id,
        )


# ============================================================
# FastAPI App
# ============================================================
@asynccontextmanager
async def lifespan(app: FastAPI):
    global pool, btcpay
    pool = await get_pool()
    btcpay = BTCPayClient()
    yield
    await pool.close()
    await btcpay.client.aclose()


app = FastAPI(title="KosmoConnect Billing Service", lifespan=lifespan)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


@app.get("/health")
async def health():
    return {"status": "ok", "service": "billing"}


@app.post("/api/v1/billing/invoices", status_code=201)
async def create_invoice(req: CreateInvoiceRequest, x_user_id: Optional[str] = Header(None)):
    if not x_user_id:
        raise HTTPException(status_code=401, detail="Missing X-User-ID header")
    plan = config.PLANS.get(req.plan_type)
    if not plan:
        raise HTTPException(status_code=400, detail="Invalid plan type")
    if not btcpay.store_id or not btcpay.api_key:
        raise HTTPException(status_code=503, detail="BTCPay not configured")
    order_id = f"kosmo-{x_user_id}-{req.plan_type}-{uuid.uuid4().hex[:8]}"
    try:
        invoice = await btcpay.create_invoice(
            amount=plan["price"],
            currency=plan["currency"],
            order_id=order_id,
            metadata={
                "user_id": x_user_id,
                "plan_type": req.plan_type,
                "redirect_url": req.redirect_url or "",
            },
            checkout_desc=f"KosmoConnect {plan['name']} Plan",
        )
    except Exception as e:
        logger.exception("BTCPay invoice creation failed: %s", e)
        raise HTTPException(status_code=502, detail="Failed to create invoice with BTCPay")
    async with pool.acquire() as conn:
        await conn.execute(
            """
            INSERT INTO btcpay_invoices (
                user_id, btcpay_invoice_id, store_id, plan_type,
                amount, currency, status, checkout_url, metadata
            ) VALUES ($1, $2, $3, $4, $5, $6, 'Pending', $7, $8)
            """,
            x_user_id,
            invoice["id"],
            btcpay.store_id,
            req.plan_type,
            plan["price"],
            plan["currency"],
            invoice.get("checkoutLink") or invoice.get("checkoutUrl", ""),
            json.dumps({"order_id": order_id, "redirect_url": req.redirect_url or ""}),
        )
    return CreateInvoiceResponse(
        invoice_id=invoice["id"],
        checkout_url=invoice.get("checkoutLink") or invoice.get("checkoutUrl", ""),
        amount=plan["price"],
        currency=plan["currency"],
        plan_type=req.plan_type,
    )


@app.get("/api/v1/billing/invoices")
async def list_invoices(x_user_id: Optional[str] = Header(None)):
    if not x_user_id:
        raise HTTPException(status_code=401, detail="Missing X-User-ID header")
    async with pool.acquire() as conn:
        rows = await conn.fetch(
            """
            SELECT
                btcpay_invoice_id AS invoice_id,
                plan_type,
                amount,
                currency,
                status,
                checkout_url,
                created_at,
                settled_at
            FROM btcpay_invoices
            WHERE user_id = $1
            ORDER BY created_at DESC
            """,
            x_user_id,
        )
    return {"data": [dict(r) for r in rows]}


@app.get("/api/v1/billing/invoices/{invoice_id}")
async def get_invoice(invoice_id: str, x_user_id: Optional[str] = Header(None)):
if not x_user_id:
raise HTTPException(status_code=401, detail="Missing X-User-ID header")
async with pool.acquire() as conn:
row = await conn.fetchrow(
"SELECT * FROM btcpay_invoices WHERE btcpay_invoice_id = $1 AND user_id = $2",
invoice_id,
x_user_id,
)
if not row:
raise HTTPException(status_code=404, detail="Invoice not found")
# Optionally sync with BTCPay
try:
remote = await btcpay.get_invoice(invoice_id)
row_status = remote.get("status", row["status"])
if row_status != row["status"]:
await handle_invoice_webhook(invoice_id, row_status)
except Exception as e:
logger.warning("Could not sync invoice %s with BTCPay: %s", invoice_id, e)
return dict(row)
@app.post("/api/v1/billing/webhooks/btcpay")
async def btcpay_webhook(request: Request):
body = await request.body()
signature = request.headers.get("BTCPay-Sig", "")
if not btcpay.verify_webhook(body, signature):
logger.warning("BTCPay webhook signature verification failed")
raise HTTPException(status_code=401, detail="Invalid signature")
try:
payload = json.loads(body)
except json.JSONDecodeError:
raise HTTPException(status_code=400, detail="Invalid JSON")
event_type = payload.get("type", "")
invoice_id = payload.get("invoiceId")
metadata = payload.get("metadata", {})
logger.info("BTCPay webhook: type=%s invoice=%s", event_type, invoice_id)
if event_type.startswith("Invoice") and invoice_id:
# For detailed events we may still need to query BTCPay for the exact status,
# but BTCPay v2 webhooks usually include enough info in payload["status"]
status = payload.get("status", "")
if not status and event_type == "InvoiceSettled":
status = "Settled"
elif not status and event_type == "InvoiceExpired":
status = "Expired"
elif not status and event_type == "InvoiceInvalid":
status = "Invalid"
await handle_invoice_webhook(invoice_id, status)
return {"status": "ok"}
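The `verify_webhook` call above checks the `BTCPay-Sig` header. A minimal sketch of that verification, assuming BTCPay's documented `sha256=<hex>` HMAC-over-raw-body format and the store's shared `WEBHOOK_SECRET` (the helper name is illustrative):

```python
import hashlib
import hmac

def verify_btcpay_signature(body: bytes, signature: str, secret: str) -> bool:
    """Constant-time compare of the BTCPay-Sig header against our own HMAC of the raw body."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```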


@@ -0,0 +1,28 @@
from typing import Optional
from pydantic import BaseModel
class CreateInvoiceRequest(BaseModel):
plan_type: str
redirect_url: Optional[str] = None
class CreateInvoiceResponse(BaseModel):
invoice_id: str
checkout_url: str
amount: float
currency: str
plan_type: str
class WebhookPayload(BaseModel):
# BTCPay webhook payload is flexible; we only validate the parts we need
deliveryId: Optional[str] = None
webhookId: Optional[str] = None
originalDeliveryId: Optional[str] = None
isRedelivery: bool = False
type: str
timestamp: int
storeId: Optional[str] = None
invoiceId: Optional[str] = None
metadata: Optional[dict] = None



@@ -0,0 +1,129 @@
services:
timescaledb:
image: timescale/timescaledb:latest-pg15
container_name: kosmo_timescaledb
restart: unless-stopped
environment:
POSTGRES_USER: ${POSTGRES_USER:-kosmo}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB:-kosmoconnect}
ports:
- "127.0.0.1:5432:5432"
volumes:
- timescale_data:/var/lib/postgresql/data
- ./migrations:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-kosmo} -d ${POSTGRES_DB:-kosmoconnect}"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
container_name: kosmo_redis
restart: unless-stopped
ports:
- "127.0.0.1:6379:6379"
volumes:
- redis_data:/data
mosquitto:
image: eclipse-mosquitto:2
container_name: kosmo_mosquitto
restart: unless-stopped
ports:
- "127.0.0.1:1883:1883"
volumes:
- ./mosquitto.conf:/mosquitto/config/mosquitto.conf
- mosquitto_data:/mosquitto/data
api:
build: .
container_name: kosmo_api
restart: unless-stopped
command: ["uvicorn", "api.src.main:app", "--host", "0.0.0.0", "--port", "8000"]
environment:
DATABASE_URL: ${DATABASE_URL}
MQTT_HOST: ${MQTT_HOST:-mosquitto}
MQTT_PORT: ${MQTT_PORT:-1883}
ports:
- "127.0.0.1:8002:8000"
depends_on:
timescaledb:
condition: service_healthy
mosquitto:
condition: service_started
ingestion:
build: .
container_name: kosmo_ingestion
restart: unless-stopped
command: ["uvicorn", "ingestion.src.main:app", "--host", "0.0.0.0", "--port", "8000"]
environment:
DATABASE_URL: ${DATABASE_URL}
MQTT_HOST: ${MQTT_HOST:-mosquitto}
MQTT_PORT: ${MQTT_PORT:-1883}
MQTT_TOPIC: ${MQTT_TOPIC:-kosmo/ingest/enviro}
ports:
- "127.0.0.1:8001:8000"
depends_on:
timescaledb:
condition: service_healthy
mosquitto:
condition: service_started
gateway:
build: .
container_name: kosmo_gateway
restart: unless-stopped
command: ["uvicorn", "gateway.src.main:app", "--host", "0.0.0.0", "--port", "8000"]
environment:
DATABASE_URL: ${DATABASE_URL}
MQTT_HOST: ${MQTT_HOST:-mosquitto}
MQTT_PORT: ${MQTT_PORT:-1883}
ports:
- "127.0.0.1:8003:8000"
depends_on:
timescaledb:
condition: service_healthy
mosquitto:
condition: service_started
billing:
build: .
container_name: kosmo_billing
restart: unless-stopped
command: ["uvicorn", "billing.src.main:app", "--host", "0.0.0.0", "--port", "8000"]
environment:
DATABASE_URL: ${DATABASE_URL}
BTCPAY_URL: ${BTCPAY_URL}
BTCPAY_API_KEY: ${BTCPAY_API_KEY}
BTCPAY_STORE_ID: ${BTCPAY_STORE_ID}
WEBHOOK_SECRET: ${WEBHOOK_SECRET}
ports:
- "127.0.0.1:8004:8000"
depends_on:
timescaledb:
condition: service_healthy
nginx:
image: nginx:alpine
container_name: kosmo_nginx
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ../web/dashboard/dist:/usr/share/nginx/html/dashboard:ro
- ../web/messaging/dist:/usr/share/nginx/html/messaging:ro
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
- ./certs:/etc/nginx/certs:ro
depends_on:
- api
- gateway
- billing
volumes:
timescale_data:
redis_data:
mosquitto_data:


@@ -0,0 +1,56 @@
services:
timescaledb:
image: timescale/timescaledb:latest-pg15
container_name: kosmo_timescaledb
environment:
POSTGRES_USER: kosmo
POSTGRES_PASSWORD: kosmo_dev_pass
POSTGRES_DB: kosmoconnect
ports:
- "5432:5432"
volumes:
- timescale_data:/var/lib/postgresql/data
- ./migrations:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "pg_isready -U kosmo -d kosmoconnect"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
container_name: kosmo_redis
ports:
- "6379:6379"
volumes:
- redis_data:/data
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: kosmo_rabbitmq
ports:
- "5672:5672"
- "15672:15672"
environment:
RABBITMQ_DEFAULT_USER: kosmo
RABBITMQ_DEFAULT_PASS: kosmo_dev_pass
volumes:
- rabbitmq_data:/var/lib/rabbitmq
mosquitto:
image: eclipse-mosquitto:2
container_name: kosmo_mosquitto
ports:
- "1883:1883"
- "9001:9001"
volumes:
- ./mosquitto.conf:/mosquitto/config/mosquitto.conf
- mosquitto_data:/mosquitto/data
- mosquitto_logs:/mosquitto/log
volumes:
timescale_data:
redis_data:
rabbitmq_data:
mosquitto_data:
mosquitto_logs:

backend/gateway/README.md Normal file

@@ -0,0 +1,104 @@
# KosmoConnect Gateway Service
The **Gateway Service** handles all web-to-mesh and mesh-to-web messaging. It is the monetization boundary of the network.
## What It Does
- **Subscription Enforcement**: Validates that the user has an active subscription and that their plan allows messaging the target node
- **Quota Management**: Tracks monthly message usage and rejects requests when limits are exceeded
- **Outbound Queue**: Accepts web messages, stores them in PostgreSQL, and publishes them to MQTT for bridge delivery
- **Inbound Consumer**: Listens to `kosmo/mesh/inbound` and stores replies, automatically threading them into conversations
- **Delivery Tracking**: Message status progresses `pending -> queued -> transmitted -> delivered` (future: bridge ACKs will update to `transmitted`)
## Endpoints
| Method | Path | Description |
|--------|------|-------------|
| POST | `/api/v1/messages` | Send a message to a mesh node |
| GET | `/api/v1/messages/conversations` | List all conversations for the user |
| GET | `/api/v1/messages/conversations/{node_id}` | Get full message history with a node |
| GET | `/api/v1/messages/{message_id}` | Check delivery status of a message |
## Authentication (v0.1)
For rapid development, the gateway currently uses a simple `X-User-ID` header to identify the caller. In production this will be replaced with JWT/OAuth2.
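A minimal sketch of what that header check amounts to today (the `parse_user_id` helper is illustrative, not part of the gateway source; the seeded test users all use UUID ids):

```python
import uuid
from typing import Optional

def parse_user_id(header_value: Optional[str]) -> Optional[str]:
    """Return a normalized user id if X-User-ID is a well-formed UUID, else None."""
    if not header_value:
        return None
    try:
        return str(uuid.UUID(header_value))
    except ValueError:
        return None
```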
## Billing
Subscription management is handled by the [Billing Service](../billing/README.md), which integrates with the Church of Kosmo's BTCPay Server at `pay.cqre.net`. The Gateway does not process payments itself; it only reads subscription state from the shared PostgreSQL database.
## Subscription Scopes
| Plan | Scope | Quota (example) |
|------|-------|-----------------|
| `wanderer` | Any node on the mesh | 50/month |
| `guardian` | Only whitelisted nodes | 500/month |
| `sanctuary` | Any node + API/webhooks | Unlimited |
| `free` | Receive only | 0 outbound |
## Running Locally
Make sure the backend infrastructure (Postgres, MQTT) is running:
```bash
cd backend
docker-compose up -d
```
Seed test users (only needed once):
```bash
docker-compose exec -T timescaledb psql -U kosmo -d kosmoconnect < migrations/002_seed_test_users.sql
```
Start the gateway:
```bash
./run-dev.sh gateway
```
## Testing with cURL
```bash
# Send a message (test wanderer user)
curl -X POST http://localhost:8003/api/v1/messages \
-H "Content-Type: application/json" \
-H "X-User-ID: 11111111-1111-1111-1111-111111111111" \
-d '{"target_node_id": "!a1b2c3d4", "text": "Hello mesh"}'
# List conversations
curl http://localhost:8003/api/v1/messages/conversations \
-H "X-User-ID: 11111111-1111-1111-1111-111111111111"
# Check message status
curl http://localhost:8003/api/v1/messages/{message_id} \
-H "X-User-ID: 11111111-1111-1111-1111-111111111111"
```
## Architecture
```
Web Client
|
| POST /api/v1/messages (X-User-ID)
v
Gateway Service (:8003)
|- Checks subscription + quota in PostgreSQL
|- Writes message to mesh_messages (status=pending)
|- Background worker publishes pending rows to MQTT
|
v
MQTT Broker (kosmo/mesh/outbound/{node_id})
|
v
Bridge Daemon (Pi) -> Meshtastic Mesh -> Target Node
Reply path:
Target Node -> Mesh -> Bridge Daemon -> MQTT (kosmo/mesh/inbound)
|
v
Gateway Service consumes MQTT and writes reply to mesh_messages
|
v
Web Client reads via GET /api/v1/messages/conversations
```
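The reply path above hinges on the bridge publishing JSON that the gateway's inbound consumer can parse. A sketch of that payload, assuming the field names the gateway reads (`source_node_id`, `gateway_node_id`, `hop_count`, `rssi`, `snr`); the `build_inbound_payload` helper is illustrative, not part of the bridge source:

```python
import json
import uuid
from typing import Optional

def build_inbound_payload(source_node_id: str, gateway_node_id: str, text: str,
                          hop_count: Optional[int] = None, rssi: Optional[int] = None,
                          snr: Optional[float] = None) -> str:
    """Serialize a received mesh packet for publication on kosmo/mesh/inbound."""
    return json.dumps({
        "message_id": str(uuid.uuid4()),
        "source_node_id": source_node_id,
        "gateway_node_id": gateway_node_id,
        "text": text,
        "hop_count": hop_count,
        "rssi": rssi,
        "snr": snr,
    })
```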


backend/gateway/src/main.py Normal file

@@ -0,0 +1,362 @@
#!/usr/bin/env python3
"""
KosmoConnect Gateway Service
Handles web-to-mesh messaging:
- Accepts outbound messages from web clients
- Validates subscriptions and quotas
- Publishes to MQTT for bridge delivery
- Consumes inbound mesh messages and stores them as replies
- Tracks delivery status
"""
import asyncio
import json
import logging
import os
import sys
import uuid
from contextlib import asynccontextmanager
from datetime import datetime, timezone
from typing import Optional
import aiomqtt
from fastapi import FastAPI, Header, HTTPException
from fastapi.middleware.cors import CORSMiddleware
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../shared"))
from db import get_pool
from gateway.src.models import SendMessageRequest, SendMessageResponse
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("gateway")
MQTT_HOST = os.getenv("MQTT_HOST", "localhost")
MQTT_PORT = int(os.getenv("MQTT_PORT", "1883"))
MQTT_TOPIC_INBOUND = os.getenv("MQTT_TOPIC_INBOUND", "kosmo/mesh/inbound")
MQTT_TOPIC_OUTBOUND_PREFIX = os.getenv("MQTT_TOPIC_OUTBOUND_PREFIX", "kosmo/mesh/outbound")
pool = None
# ============================================================
# Subscription / Quota Enforcement
# ============================================================
async def get_active_subscription(user_id: str):
async with pool.acquire() as conn:
row = await conn.fetchrow(
"""
SELECT id, plan_type, message_quota, messages_used, valid_until
FROM subscriptions
WHERE user_id = $1 AND is_active = true
AND valid_from <= NOW() AND (valid_until IS NULL OR valid_until > NOW())
ORDER BY valid_from DESC
LIMIT 1
""",
user_id,
)
return row
async def can_send_to_node(user_id: str, target_node_id: str) -> bool:
sub = await get_active_subscription(user_id)
if not sub:
return False
if sub["plan_type"] in ("wanderer", "sanctuary"):
return True
if sub["plan_type"] == "guardian":
async with pool.acquire() as conn:
row = await conn.fetchrow(
"SELECT 1 FROM allowed_nodes WHERE user_id = $1 AND mesh_node_id = $2",
user_id,
target_node_id,
)
return row is not None
# free plan cannot send outbound
return False
async def check_and_increment_quota(user_id: str) -> bool:
    sub = await get_active_subscription(user_id)
    if not sub:
        return False
    async with pool.acquire() as conn:
        # Check and increment in one conditional UPDATE so concurrent requests
        # cannot race past the limit; a NULL message_quota means unlimited.
        result = await conn.execute(
            """
            UPDATE subscriptions
            SET messages_used = messages_used + 1
            WHERE id = $1
              AND (message_quota IS NULL OR messages_used < message_quota)
            """,
            sub["id"],
        )
    return result == "UPDATE 1"
# ============================================================
# MQTT Outbound Worker
# ============================================================
async def mqtt_outbound_worker():
"""Background task: pick up pending messages and publish to MQTT."""
logger.info("Starting MQTT outbound worker")
while True:
try:
async with aiomqtt.Client(MQTT_HOST, MQTT_PORT) as client:
while True:
async with pool.acquire() as conn:
rows = await conn.fetch(
"""
SELECT id, target_node_id, text
FROM mesh_messages
WHERE direction = 'outbound' AND status = 'pending'
ORDER BY created_at ASC
LIMIT 50
"""
)
for row in rows:
topic = f"{MQTT_TOPIC_OUTBOUND_PREFIX}/{row['target_node_id']}"
payload = {
"message_id": str(row["id"]),
"text": row["text"],
"created_at": datetime.now(timezone.utc).isoformat(),
}
try:
await client.publish(topic, json.dumps(payload), qos=1)
async with pool.acquire() as conn:
await conn.execute(
"UPDATE mesh_messages SET status = 'queued', updated_at = NOW() WHERE id = $1",
row["id"],
)
logger.info("Published pending message %s to %s", row["id"], topic)
except Exception as e:
logger.exception("Failed to publish message %s: %s", row["id"], e)
await asyncio.sleep(2)
except aiomqtt.MqttError as e:
logger.error("MQTT outbound worker error: %s. Reconnecting in 5s...", e)
await asyncio.sleep(5)
# ============================================================
# MQTT Inbound Consumer
# ============================================================
async def mqtt_inbound_consumer():
"""Background task: consume mesh->cloud messages and store replies."""
logger.info("Starting MQTT inbound consumer")
while True:
try:
async with aiomqtt.Client(MQTT_HOST, MQTT_PORT) as client:
await client.subscribe(MQTT_TOPIC_INBOUND)
logger.info("Subscribed to %s", MQTT_TOPIC_INBOUND)
async for message in client.messages:
try:
data = json.loads(message.payload.decode())
await handle_inbound(data)
except Exception as e:
logger.exception("Failed to process inbound message: %s", e)
except aiomqtt.MqttError as e:
logger.error("MQTT inbound consumer error: %s. Reconnecting in 5s...", e)
await asyncio.sleep(5)
async def handle_inbound(data: dict):
source_node_id = data.get("source_node_id")
text = data.get("text", "")
async with pool.acquire() as conn:
# Try to match this inbound message to a user who has previously sent
# an outbound message to this node. For v0.1 we attach it to the most
# recent sender, or leave user_id NULL if unknown.
user_row = await conn.fetchrow(
"""
SELECT user_id FROM mesh_messages
WHERE direction = 'outbound' AND target_node_id = $1 AND user_id IS NOT NULL
ORDER BY created_at DESC
LIMIT 1
""",
source_node_id,
)
user_id = user_row["user_id"] if user_row else None
await conn.execute(
"""
INSERT INTO mesh_messages (
id, user_id, direction, sender_node_id, gateway_node_id,
text, status, hop_count, rssi, snr, created_at, updated_at
) VALUES ($1, $2, 'inbound', $3, $4, $5, 'delivered', $6, $7, $8, NOW(), NOW())
""",
uuid.UUID(data.get("message_id")) if data.get("message_id") else uuid.uuid4(),
user_id,
source_node_id,
data.get("gateway_node_id"),
text,
data.get("hop_count"),
data.get("rssi"),
data.get("snr"),
)
logger.info("Stored inbound message from %s", source_node_id)
# ============================================================
# FastAPI App
# ============================================================
@asynccontextmanager
async def lifespan(app: FastAPI):
global pool
pool = await get_pool()
t1 = asyncio.create_task(mqtt_outbound_worker())
t2 = asyncio.create_task(mqtt_inbound_consumer())
yield
t1.cancel()
t2.cancel()
try:
await t1
except asyncio.CancelledError:
pass
try:
await t2
except asyncio.CancelledError:
pass
await pool.close()
app = FastAPI(title="KosmoConnect Gateway Service", lifespan=lifespan)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
@app.get("/health")
async def health():
return {"status": "ok", "service": "gateway"}
@app.post("/api/v1/messages", status_code=202)
async def send_message(req: SendMessageRequest, x_user_id: Optional[str] = Header(None)):
if not x_user_id:
raise HTTPException(status_code=401, detail="Missing X-User-ID header")
if not await can_send_to_node(x_user_id, req.target_node_id):
raise HTTPException(status_code=403, detail="Subscription does not allow messaging this node")
if not await check_and_increment_quota(x_user_id):
raise HTTPException(status_code=429, detail="Monthly message quota exceeded")
msg_id = uuid.uuid4()
async with pool.acquire() as conn:
await conn.execute(
"""
INSERT INTO mesh_messages (
id, user_id, direction, target_node_id, text, status, created_at, updated_at
) VALUES ($1, $2, 'outbound', $3, $4, 'pending', NOW(), NOW())
""",
msg_id,
x_user_id,
req.target_node_id,
req.text,
)
logger.info("Queued message %s from user %s to %s", msg_id, x_user_id, req.target_node_id)
return SendMessageResponse(message_id=str(msg_id), status="pending", queued_at=datetime.now(timezone.utc))
@app.get("/api/v1/messages/conversations")
async def list_conversations(x_user_id: Optional[str] = Header(None)):
if not x_user_id:
raise HTTPException(status_code=401, detail="Missing X-User-ID header")
async with pool.acquire() as conn:
        rows = await conn.fetch(
            """
            WITH user_msgs AS (
                SELECT
                    CASE WHEN direction = 'outbound' THEN target_node_id
                         ELSE sender_node_id END AS node_id,
                    text,
                    created_at,
                    direction,
                    ROW_NUMBER() OVER (
                        PARTITION BY CASE WHEN direction = 'outbound' THEN target_node_id
                                          ELSE sender_node_id END
                        ORDER BY created_at DESC
                    ) AS rn
                FROM mesh_messages
                WHERE user_id = $1
            ),
            counts AS (
                SELECT node_id,
                       COUNT(*) FILTER (WHERE direction = 'inbound')::int AS inbound_count
                FROM user_msgs
                GROUP BY node_id
            )
            SELECT
                latest.node_id,
                latest.text AS latest_text,
                latest.created_at AS latest_at,
                a.nickname,
                -- v0.1 has no read receipts; every inbound message counts as unread
                counts.inbound_count AS unread_count
            FROM user_msgs AS latest
            JOIN counts USING (node_id)
            LEFT JOIN allowed_nodes a ON a.user_id = $1 AND a.mesh_node_id = latest.node_id
            WHERE latest.rn = 1
            ORDER BY latest.created_at DESC
            """,
            x_user_id,
        )
return {"data": [dict(r) for r in rows]}
@app.get("/api/v1/messages/conversations/{node_id}")
async def get_conversation(node_id: str, x_user_id: Optional[str] = Header(None)):
if not x_user_id:
raise HTTPException(status_code=401, detail="Missing X-User-ID header")
async with pool.acquire() as conn:
rows = await conn.fetch(
"""
SELECT
id::text,
direction,
sender_node_id,
target_node_id,
text,
status,
hop_count,
rssi,
snr,
created_at
FROM mesh_messages
WHERE user_id = $1 AND (
(direction = 'outbound' AND target_node_id = $2)
OR
(direction = 'inbound' AND sender_node_id = $2)
)
ORDER BY created_at ASC
""",
x_user_id,
node_id,
)
return {"data": [dict(r) for r in rows]}
@app.get("/api/v1/messages/{message_id}")
async def get_message_status(message_id: str, x_user_id: Optional[str] = Header(None)):
if not x_user_id:
raise HTTPException(status_code=401, detail="Missing X-User-ID header")
async with pool.acquire() as conn:
row = await conn.fetchrow(
"SELECT id::text, status, created_at, updated_at FROM mesh_messages WHERE id = $1 AND user_id = $2",
message_id,
x_user_id,
)
if not row:
raise HTTPException(status_code=404, detail="Message not found")
return dict(row)


@@ -0,0 +1,35 @@
from datetime import datetime
from typing import Optional
from pydantic import BaseModel, Field
class SendMessageRequest(BaseModel):
target_node_id: str
text: str = Field(..., max_length=200)
class SendMessageResponse(BaseModel):
message_id: str
status: str
queued_at: datetime
class MessageOut(BaseModel):
id: str
direction: str
sender_node_id: Optional[str] = None
target_node_id: Optional[str] = None
text: Optional[str] = None
status: Optional[str] = None
hop_count: Optional[int] = None
rssi: Optional[int] = None
snr: Optional[float] = None
created_at: datetime
class ConversationSummary(BaseModel):
node_id: str
nickname: Optional[str] = None
latest_text: Optional[str] = None
latest_at: Optional[datetime] = None
unread_count: int = 0


@@ -0,0 +1,116 @@
import asyncio
import json
import logging
import os
import sys
from contextlib import asynccontextmanager
from datetime import datetime, timezone
import aiomqtt
from fastapi import FastAPI
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../shared"))
from db import get_pool
from ingestion.src.models import IngestEnviroPayload, EnviroReading
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ingestion")
MQTT_HOST = os.getenv("MQTT_HOST", "localhost")
MQTT_PORT = int(os.getenv("MQTT_PORT", "1883"))
MQTT_TOPIC = os.getenv("MQTT_TOPIC", "kosmo/ingest/enviro")
pool = None
async def handle_enviro(payload: IngestEnviroPayload):
global pool
if pool is None:
return
reading = payload.payload
async with pool.acquire() as conn:
# Ensure node exists (upsert minimal record)
await conn.execute(
"""
INSERT INTO nodes (mesh_node_id, name, lat, lon, last_seen, is_active)
VALUES ($1, $2, $3, $4, $5, true)
ON CONFLICT (mesh_node_id) DO UPDATE
SET last_seen = EXCLUDED.last_seen,
lat = COALESCE(EXCLUDED.lat, nodes.lat),
lon = COALESCE(EXCLUDED.lon, nodes.lon)
""",
payload.node_id,
payload.node_id,
payload.lat,
payload.lon,
payload.received_at,
)
# Insert reading
await conn.execute(
"""
INSERT INTO enviro_readings (
time, node_id, temperature_c, humidity_percent, pressure_pa,
wind_speed_ms, wind_direction, pm25_ugm3, pm10_ugm3,
gas_resistance_kohm, battery_voltage, solar_voltage
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
""",
reading.time or payload.received_at,
payload.node_id,
reading.temperature_c,
reading.humidity_percent,
reading.pressure_pa,
reading.wind_speed_ms,
reading.wind_direction,
reading.pm25_ugm3,
reading.pm10_ugm3,
reading.gas_resistance_kohm,
reading.battery_voltage,
reading.solar_voltage,
)
logger.info("Ingested reading for node %s", payload.node_id)
async def mqtt_consumer():
global pool
pool = await get_pool()
logger.info("Connecting to MQTT at %s:%s", MQTT_HOST, MQTT_PORT)
while True:
try:
async with aiomqtt.Client(MQTT_HOST, MQTT_PORT) as client:
await client.subscribe(MQTT_TOPIC)
logger.info("Subscribed to %s", MQTT_TOPIC)
async for message in client.messages:
try:
data = json.loads(message.payload.decode())
payload = IngestEnviroPayload(**data)
await handle_enviro(payload)
except Exception as e:
logger.exception("Failed to process message: %s", e)
except aiomqtt.MqttError as e:
logger.error("MQTT error: %s. Reconnecting in 5s...", e)
await asyncio.sleep(5)
@asynccontextmanager
async def lifespan(app: FastAPI):
task = asyncio.create_task(mqtt_consumer())
yield
task.cancel()
try:
await task
except asyncio.CancelledError:
pass
if pool:
await pool.close()
app = FastAPI(title="KosmoConnect Ingestion Service", lifespan=lifespan)
@app.get("/health")
async def health():
return {"status": "ok", "service": "ingestion"}


@@ -0,0 +1,102 @@
-- KosmoConnect Initial Schema
-- Runs automatically when the TimescaleDB container starts for the first time
-- Enable TimescaleDB extension
CREATE EXTENSION IF NOT EXISTS timescaledb;
-- ============================================================
-- Nodes Registry
-- ============================================================
CREATE TABLE IF NOT EXISTS nodes (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
mesh_node_id TEXT UNIQUE NOT NULL,
name TEXT,
lat DOUBLE PRECISION,
lon DOUBLE PRECISION,
hardware_revision TEXT DEFAULT 'v1.0',
installed_at TIMESTAMPTZ DEFAULT NOW(),
last_seen TIMESTAMPTZ,
is_active BOOLEAN DEFAULT true,
metadata JSONB DEFAULT '{}'
);
CREATE INDEX IF NOT EXISTS idx_nodes_mesh_node_id ON nodes(mesh_node_id);
CREATE INDEX IF NOT EXISTS idx_nodes_last_seen ON nodes(last_seen);
-- ============================================================
-- Environmental Readings (Time-series)
-- ============================================================
CREATE TABLE IF NOT EXISTS enviro_readings (
time TIMESTAMPTZ NOT NULL,
node_id TEXT NOT NULL REFERENCES nodes(mesh_node_id) ON DELETE CASCADE,
temperature_c DOUBLE PRECISION,
humidity_percent DOUBLE PRECISION,
pressure_pa DOUBLE PRECISION,
wind_speed_ms DOUBLE PRECISION,
wind_direction SMALLINT,
pm25_ugm3 DOUBLE PRECISION,
pm10_ugm3 DOUBLE PRECISION,
gas_resistance_kohm DOUBLE PRECISION,
battery_voltage DOUBLE PRECISION,
solar_voltage DOUBLE PRECISION
);
-- Convert to hypertable for automatic time-based partitioning
SELECT create_hypertable('enviro_readings', 'time', if_not_exists => TRUE);
CREATE INDEX IF NOT EXISTS idx_enviro_node_id_time ON enviro_readings(node_id, time DESC);
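The hypertable exists so dashboards can aggregate readings by time bucket. What TimescaleDB's `time_bucket()` computes, sketched in plain Python for illustration (not part of the migration):

```python
from datetime import datetime, timedelta, timezone

def time_bucket(interval: timedelta, ts: datetime) -> datetime:
    """Floor a timestamp to the start of its bucket, like time_bucket() in SQL."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    width = int(interval.total_seconds())
    seconds = int((ts - epoch).total_seconds())
    return epoch + timedelta(seconds=(seconds // width) * width)
```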
-- ============================================================
-- Users & Subscriptions (minimal schema for Phase 1/2)
-- Defined before mesh_messages, which references users(id)
-- ============================================================
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT UNIQUE NOT NULL,
    stripe_customer_id TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE TABLE IF NOT EXISTS subscriptions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    plan_type TEXT NOT NULL CHECK (plan_type IN ('free', 'wanderer', 'guardian', 'sanctuary')),
    stripe_subscription_id TEXT,
    message_quota INTEGER,
    messages_used INTEGER DEFAULT 0,
    valid_from TIMESTAMPTZ DEFAULT NOW(),
    valid_until TIMESTAMPTZ,
    is_active BOOLEAN DEFAULT true
);
CREATE INDEX IF NOT EXISTS idx_subscriptions_user ON subscriptions(user_id);
-- ============================================================
-- Mesh Messages (for gateway delivery tracking)
-- ============================================================
CREATE TABLE IF NOT EXISTS mesh_messages (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id) ON DELETE SET NULL,
    direction TEXT NOT NULL CHECK (direction IN ('inbound', 'outbound')),
    sender_node_id TEXT,
    target_node_id TEXT,
    gateway_node_id TEXT,
    text TEXT,
    status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'queued', 'transmitted', 'delivered', 'failed')),
    hop_count INTEGER,
    rssi INTEGER,
    snr DOUBLE PRECISION,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_mesh_messages_status ON mesh_messages(status);
CREATE INDEX IF NOT EXISTS idx_mesh_messages_target ON mesh_messages(target_node_id, created_at DESC);
CREATE TABLE IF NOT EXISTS allowed_nodes (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
mesh_node_id TEXT NOT NULL,
nickname TEXT,
created_at TIMESTAMPTZ DEFAULT NOW(),
UNIQUE(user_id, mesh_node_id)
);


@@ -0,0 +1,20 @@
-- Seed test users and subscriptions for local gateway development
-- Run manually when setting up a fresh dev environment
INSERT INTO users (id, email)
VALUES
('11111111-1111-1111-1111-111111111111', 'test@kosmoconnect.local'),
('22222222-2222-2222-2222-222222222222', 'guardian@kosmoconnect.local')
ON CONFLICT (id) DO NOTHING;
INSERT INTO subscriptions (id, user_id, plan_type, message_quota, valid_from, valid_until, is_active)
VALUES
('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', '11111111-1111-1111-1111-111111111111', 'wanderer', 50, NOW(), NOW() + INTERVAL '1 year', true),
('bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb', '22222222-2222-2222-2222-222222222222', 'guardian', 500, NOW(), NOW() + INTERVAL '1 year', true)
ON CONFLICT DO NOTHING;
INSERT INTO allowed_nodes (id, user_id, mesh_node_id, nickname)
VALUES
('cccccccc-cccc-cccc-cccc-cccccccccccc', '22222222-2222-2222-2222-222222222222', '!a1b2c3d4', 'Base Camp'),
('dddddddd-dddd-dddd-dddd-dddddddddddd', '22222222-2222-2222-2222-222222222222', '!b2c3d4e5', 'Lookout')
ON CONFLICT DO NOTHING;


@@ -0,0 +1,24 @@
-- BTCPay Server Billing Schema
CREATE TABLE IF NOT EXISTS btcpay_invoices (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
btcpay_invoice_id TEXT UNIQUE,
store_id TEXT NOT NULL,
plan_type TEXT NOT NULL CHECK (plan_type IN ('free', 'wanderer', 'guardian', 'sanctuary')),
amount DECIMAL(16, 8) NOT NULL,
currency TEXT NOT NULL DEFAULT 'USD',
status TEXT DEFAULT 'Pending' CHECK (status IN ('Pending', 'Processing', 'Settled', 'Invalid', 'Expired')),
checkout_url TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW(),  -- touched by the webhook handler
    settled_at TIMESTAMPTZ,
metadata JSONB DEFAULT '{}'
);
CREATE INDEX IF NOT EXISTS idx_btcpay_invoices_user ON btcpay_invoices(user_id);
CREATE INDEX IF NOT EXISTS idx_btcpay_invoices_btcpay_id ON btcpay_invoices(btcpay_invoice_id);
-- Add btcpay metadata to subscriptions for traceability
ALTER TABLE subscriptions
ADD COLUMN IF NOT EXISTS btcpay_invoice_id TEXT,
ADD COLUMN IF NOT EXISTS payment_method TEXT;

backend/mosquitto.conf Normal file

@@ -0,0 +1,5 @@
listener 1883
allow_anonymous true
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log

backend/nginx.conf Normal file

@@ -0,0 +1,49 @@
server {
listen 80;
server_name _;
# Dashboard
location / {
root /usr/share/nginx/html/dashboard;
try_files $uri $uri/ /index.html;
}
# Messaging client (served from /usr/share/nginx/html/messaging; root is used
# instead of alias because try_files and alias interact poorly)
location /messaging {
    root /usr/share/nginx/html;
    try_files $uri $uri/ /messaging/index.html;
}
# API services proxy
location /api/v1/weather/ {
proxy_pass http://api:8000/api/v1/weather/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /api/v1/nodes {
proxy_pass http://api:8000/api/v1/nodes;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /api/v1/messages {
proxy_pass http://gateway:8000/api/v1/messages;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /api/v1/billing/ {
proxy_pass http://billing:8000/api/v1/billing/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}

backend/requirements.txt Normal file

@@ -0,0 +1,20 @@
# KosmoConnect Backend Dependencies
fastapi==0.111.0
uvicorn[standard]==0.30.0
pydantic==2.11.3
pydantic-settings==2.8.1
# Database
asyncpg==0.30.0
# MQTT
aiomqtt==2.3.0
# HTTP client (for inter-service calls, webhooks)
httpx==0.27.0
# Utilities
python-dateutil==2.9.0
# Simulator script
paho-mqtt==2.1.0

backend/run-dev.sh Executable file

@@ -0,0 +1,33 @@
#!/usr/bin/env bash
set -euo pipefail
# KosmoConnect Backend Dev Runner
# Usage: ./run-dev.sh [ingestion|api|gateway|billing]
SERVICE="${1:-}"
if [ -z "$SERVICE" ]; then
  echo "Usage: ./run-dev.sh [ingestion|api|gateway|billing]"
exit 1
fi
cd "$(dirname "$0")"
export PYTHONPATH="$(pwd)/shared${PYTHONPATH:+:$PYTHONPATH}"
if [ "$SERVICE" = "ingestion" ]; then
echo "Starting Ingestion Service..."
uvicorn ingestion.src.main:app --host 0.0.0.0 --port 8001 --reload
elif [ "$SERVICE" = "api" ]; then
echo "Starting API Service..."
uvicorn api.src.main:app --host 0.0.0.0 --port 8002 --reload
elif [ "$SERVICE" = "gateway" ]; then
echo "Starting Gateway Service..."
uvicorn gateway.src.main:app --host 0.0.0.0 --port 8003 --reload
elif [ "$SERVICE" = "billing" ]; then
echo "Starting Billing Service..."
uvicorn billing.src.main:app --host 0.0.0.0 --port 8004 --reload
else
echo "Unknown service: $SERVICE"
  echo "Usage: ./run-dev.sh [ingestion|api|gateway|billing]"
exit 1
fi

backend/shared/db.py Normal file

@@ -0,0 +1,11 @@
import os
import asyncpg
DB_DSN = os.getenv(
"DATABASE_URL",
"postgresql://kosmo:kosmo_dev_pass@localhost:5432/kosmoconnect"
)
_pool: asyncpg.Pool | None = None

async def get_pool() -> asyncpg.Pool:
    """Return the shared process-wide pool, creating it on first use."""
    global _pool
    if _pool is None:
        _pool = await asyncpg.create_pool(DB_DSN, min_size=2, max_size=10)
    return _pool

backend/shared/models.py Normal file

@@ -0,0 +1,46 @@
from datetime import datetime
from typing import Optional
from pydantic import BaseModel, Field
class EnviroReading(BaseModel):
time: datetime
node_id: str
temperature_c: Optional[float] = None
humidity_percent: Optional[float] = None
pressure_pa: Optional[float] = None
wind_speed_ms: Optional[float] = None
wind_direction: Optional[int] = None
pm25_ugm3: Optional[float] = None
pm10_ugm3: Optional[float] = None
gas_resistance_kohm: Optional[float] = None
battery_voltage: Optional[float] = None
solar_voltage: Optional[float] = None
class Config:
from_attributes = True
class Node(BaseModel):
id: Optional[str] = None
mesh_node_id: str
name: Optional[str] = None
lat: Optional[float] = None
lon: Optional[float] = None
hardware_revision: str = "v1.0"
installed_at: Optional[datetime] = None
last_seen: Optional[datetime] = None
is_active: bool = True
class Config:
from_attributes = True
class IngestEnviroPayload(BaseModel):
type: str = Field(default="enviro_reading")
node_id: str
received_at: datetime
hop_count: Optional[int] = None
lat: Optional[float] = None
lon: Optional[float] = None
payload: EnviroReading

docs/api/openapi-draft.yaml Normal file

@@ -0,0 +1,203 @@
openapi: 3.0.3
info:
title: KosmoConnect API
description: Draft OpenAPI specification for the KosmoConnect platform.
version: 0.1.0
paths:
/api/v1/weather/latest:
get:
summary: Get latest readings from all nodes
parameters:
- name: node_id
in: query
schema:
type: string
required: false
description: Filter by specific node ID
responses:
'200':
description: Latest environmental readings
content:
application/json:
schema:
type: object
properties:
data:
type: array
items:
$ref: '#/components/schemas/EnviroReading'
/api/v1/weather/history:
get:
summary: Get historical readings for a node
parameters:
- name: node_id
in: query
required: true
schema:
type: string
- name: start
in: query
required: true
schema:
type: string
format: date-time
- name: end
in: query
required: true
schema:
type: string
format: date-time
- name: interval
in: query
required: false
schema:
type: string
enum: [raw, 1h, 1d]
responses:
'200':
description: Historical environmental data
content:
application/json:
schema:
type: object
properties:
data:
type: array
items:
$ref: '#/components/schemas/EnviroReading'
/api/v1/messages:
post:
summary: Send a message to a mesh node
security:
- bearerAuth: []
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/OutboundMessage'
responses:
'202':
description: Message accepted and queued
'403':
description: Subscription does not allow messaging this node
'429':
description: Rate limit exceeded
/api/v1/messages/conversations:
get:
summary: Get user's message conversations
security:
- bearerAuth: []
responses:
'200':
description: List of conversation summaries
content:
application/json:
schema:
type: object
properties:
data:
type: array
items:
type: object
properties:
node_id:
type: string
nickname:
type: string
latest_text:
type: string
latest_at:
type: string
format: date-time
unread_count:
type: integer
/api/v1/messages/conversations/{node_id}:
get:
summary: Get full conversation with a node
security:
- bearerAuth: []
parameters:
- name: node_id
in: path
required: true
schema:
type: string
responses:
'200':
description: Message history
content:
application/json:
schema:
type: object
properties:
data:
type: array
items:
type: object
properties:
id:
type: string
direction:
type: string
sender_node_id:
type: string
target_node_id:
type: string
text:
type: string
status:
type: string
created_at:
type: string
format: date-time
components:
schemas:
EnviroReading:
type: object
properties:
node_id:
type: string
timestamp:
type: string
format: date-time
temperature_c:
type: number
humidity_percent:
type: number
pressure_pa:
type: number
wind_speed_ms:
type: number
wind_direction:
type: integer
pm25_ugm3:
type: number
pm10_ugm3:
type: number
gas_resistance_kohm:
type: number
OutboundMessage:
type: object
properties:
target_node_id:
type: string
text:
type: string
maxLength: 200
required:
- target_node_id
- text
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT


@@ -0,0 +1,145 @@
# Data Flow
This document describes how environmental data moves from the sensor to the user's web browser, and how control/commands flow in the opposite direction.
## 1. Sensor Reading & Local Storage
**Frequency**: Every 60 seconds (configurable)
**Actor**: Enviro-Node firmware
1. MCU wakes from deep sleep (or remains active if interval is short)
2. Sensors are powered on, stabilized, and read
3. Raw readings are calibrated and packaged into a compact binary format
4. The packet is appended to a local ring buffer in SPI flash or SD card
5. A "data ready" flag is set for the Meshtastic module
### Data Packet Structure (Enviro-Node Local)
```c
typedef struct {
uint32_t timestamp_unix;
int16_t temperature_c; // 0.01°C resolution
uint16_t humidity_percent; // 0.01% resolution
uint32_t pressure_pa;
uint16_t wind_speed_ms; // 0.1 m/s resolution
uint16_t wind_direction; // degrees
uint16_t pm25_ugm3;
uint16_t pm10_ugm3;
uint16_t gas_resistance_kohm;
uint8_t node_id[6]; // Meshtastic Node ID
uint16_t crc16;
} enviro_packet_t;
```
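On the infrastructure side, the bridge daemon has to unpack exactly this layout. A minimal decoding sketch using Python's stdlib `struct` module, assuming a little-endian, packed (`__attribute__((packed))`) layout; the format string and function name are illustrative, not part of the firmware contract:

```python
import struct

# Little-endian, no padding: must match the firmware's packed layout.
ENVIRO_FMT = "<IhHIHHHHH6sH"  # timestamp..gas, 6-byte node id, crc16
ENVIRO_SIZE = struct.calcsize(ENVIRO_FMT)  # 30 bytes

def decode_enviro_packet(raw: bytes) -> dict:
    """Unpack one enviro_packet_t and rescale the fixed-point fields."""
    (ts, temp, hum, press, wind, wdir,
     pm25, pm10, gas, node_id, crc) = struct.unpack(ENVIRO_FMT, raw)
    return {
        "timestamp": ts,
        "temperature_c": temp / 100,    # 0.01 °C resolution
        "humidity_percent": hum / 100,  # 0.01 % resolution
        "pressure_pa": press,
        "wind_speed_ms": wind / 10,     # 0.1 m/s resolution
        "wind_direction": wdir,
        "pm25_ugm3": pm25,
        "pm10_ugm3": pm10,
        "gas_resistance_kohm": gas,
        "node_id": node_id.hex(),
        "crc16": crc,
    }
```

The CRC should be verified over the preceding bytes before trusting the reading; that check is omitted here for brevity.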
## 2. Mesh Transmission
**Frequency**: Every 5-15 minutes (configurable, power-dependent)
**Actor**: Meshtastic firmware + custom module
1. The custom module requests one or more packets from the local buffer
2. Packets are encoded into a Meshtastic `DATA` payload on a dedicated environmental channel
3. The packet is broadcast into the mesh with `want_ack = false` (fire-and-forget for efficiency)
4. If an infrastructure node is within range (direct or multi-hop), it receives the packet
5. If no route to an infrastructure node is available, the packet remains in the buffer for the next transmission window
### Channel Strategy
- **Primary Channel**: Standard Meshtastic LongFast for relaying user messages
- **Secondary Channel**: Custom `KOSMO_ENV` channel for environmental data (can use different frequency slot or SF to avoid congesting primary channel)
## 3. Bridge Ingestion
**Actor**: Infrastructure Node Bridge Daemon
1. The bridge daemon listens to Meshtastic packets via the serial/protobuf API
2. It filters for packets on the `KOSMO_ENV` channel or with a specific portnum
3. Valid environmental packets are decoded and wrapped in a JSON envelope:
```json
{
"type": "enviro_reading",
"node_id": "!a1b2c3d4",
"received_at": "2026-04-12T09:15:00Z",
"hop_count": 2,
"payload": {
"timestamp": 1744446900,
"temperature_c": 18.50,
"humidity_percent": 62.30,
...
}
}
```
4. The envelope is published to the cloud MQTT broker topic: `kosmo/ingest/enviro`
## 4. Cloud Ingestion
**Actor**: Backend Ingestion Service
1. The ingestion service subscribes to `kosmo/ingest/#`
2. On receiving a message:
- Validate JSON schema
- Verify `node_id` is registered and active
- Write raw payload to TimescaleDB hypertable `enviro_readings`
- Update node `last_seen` timestamp in PostgreSQL
- If the node has a backlog, trigger a "sync complete" notification (optional)
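The validation step can be sketched with nothing but the stdlib; the required field names follow the JSON envelope shown above, while the function name and exact error messages are illustrative:

```python
import json

REQUIRED_FIELDS = {"type", "node_id", "received_at", "payload"}

def validate_envelope(raw: str) -> dict:
    """Parse one kosmo/ingest/enviro message and sanity-check its shape."""
    msg = json.loads(raw)
    missing = REQUIRED_FIELDS - msg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if msg["type"] != "enviro_reading":
        raise ValueError(f"unexpected type: {msg['type']!r}")
    if not msg["node_id"].startswith("!"):
        raise ValueError("node_id must be a Meshtastic id like '!a1b2c3d4'")
    return msg
```

A production service would also check the node against the registry and validate the payload fields before writing to the hypertable.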
### Database Schema (Simplified)
```sql
-- TimescaleDB
CREATE TABLE enviro_readings (
time TIMESTAMPTZ NOT NULL,
node_id TEXT NOT NULL,
temperature_c DOUBLE PRECISION,
humidity_percent DOUBLE PRECISION,
pressure_pa DOUBLE PRECISION,
wind_speed_ms DOUBLE PRECISION,
wind_direction SMALLINT,
pm25_ugm3 DOUBLE PRECISION,
pm10_ugm3 DOUBLE PRECISION,
gas_resistance_kohm DOUBLE PRECISION
);
SELECT create_hypertable('enviro_readings', by_range('time'));
-- PostgreSQL
CREATE TABLE nodes (
id UUID PRIMARY KEY,
mesh_node_id TEXT UNIQUE NOT NULL,
name TEXT,
location GEOGRAPHY(POINT, 4326),
hardware_revision TEXT,
installed_at TIMESTAMPTZ,
last_seen TIMESTAMPTZ,
is_active BOOLEAN DEFAULT true
);
```
## 5. Web Dashboard Display
**Actor**: Web Dashboard (React)
1. User loads the dashboard
2. Frontend queries `/api/v1/weather/latest` and `/api/v1/weather/history`
3. API service fetches aggregated data from TimescaleDB
4. Frontend renders:
- Map markers with latest readings
- Time-series charts (temperature, wind, etc.)
- Node health indicators (battery, signal strength, last seen)
## 6. Command Flow (Web to Node)
For configuration updates or remote diagnostics:
1. Admin sends a command via web admin panel (e.g., "change reporting interval to 10 min")
2. API validates admin permissions
3. Command is queued in the message gateway for the specific node
4. Infrastructure node picks up the command via MQTT
5. Bridge daemon injects the command as a Meshtastic admin packet
6. Enviro-node receives and applies the config update
7. Acknowledgment (if requested) flows back through the same path
## Data Retention Policy
| Data Type | Storage Location | Retention |
|-----------|-----------------|-----------|
| Raw sensor readings | Enviro-Node flash | 30-90 days (ring buffer) |
| Ingested readings | TimescaleDB | 2 years raw, then downsampled |
| Downsampled aggregates | TimescaleDB | Indefinite |
| Mesh messages | PostgreSQL | 90 days |
| Audit logs | PostgreSQL | 1 year |


@@ -0,0 +1,188 @@
# Messaging Gateway Architecture
The Messaging Gateway is the bridge between the internet (web users) and the Meshtastic mesh. It is the primary monetization surface for the project.
## Business Rules
1. **The mesh is open**: Anyone with a Meshtastic device can join the Kosmo mesh and send/receive messages locally for free.
2. **The gateway is gated**: Sending a message from the internet to the mesh requires an active subscription.
3. **Authorization granularity**:
- **Network-level**: Subscriber can send to any node reachable through the gateway.
- **Node-level**: Subscriber can send only to specific whitelisted nodes (e.g., family members).
- Future: **Group-level** access for organizations.
## Gateway Flow: Web → Mesh
```
User (Web Browser)
┌──────────────┐
│ Web API │ <-- Validates JWT, checks subscription status
│ /messages │
└──────┬───────┘
┌──────────────┐
│ Billing │ <-- Confirms subscriber has active plan & quota remaining
│ Service │
└──────┬───────┘
┌──────────────┐
│ Message │ <-- Writes message to outbound queue (RabbitMQ / Redis)
│ Gateway │ Topic: `mesh.outbound.{node_id}`
└──────┬───────┘
│ MQTT / TLS
┌──────────────┐
│ Infrastructure│ <-- Bridge daemon reads queue
│ Node │
└──────┬───────┘
│ Serial / protobuf API
┌──────────────┐
│ Meshtastic │ <-- Broadcasts text message to target node ID
│ Radio │
└───────────────┘
```
## Gateway Flow: Mesh → Web
Replies and inbound messages from the mesh to a subscriber:
```
Meshtastic Radio (any node)
┌──────────────┐
│ Infrastructure│ <-- Receives mesh message
│ Node │
└──────┬───────┘
┌──────────────┐
│ Bridge │ <-- Publishes to `kosmo/mesh/inbound`
│ Daemon │
└──────┬───────┘
┌──────────────┐
│ Message │ <-- Matches sender node ID to subscriber inboxes
│ Gateway │
└──────┬───────┘
┌──────────────┐
│ Web API │ <-- Stores in user's inbox, sends push notification
│ Inbox │
└───────────────┘
```
## Subscription Models
### Plan Tiers (Example)
| Tier | Price | Messages/Month | Scope | Features |
|------|-------|----------------|-------|----------|
| Free | $0 | 5 (inbound only) | Inbox | Receive replies, view weather |
| Wanderer | $5/mo | 50 | Network | Send to any node |
| Guardian | $12/mo | 500 | Node-level | Manage up to 5 linked nodes |
| Sanctuary | $50/mo | Unlimited | Network + API | Bulk messaging, webhook access |
All paid plans are processed through the Church of Kosmo's self-hosted BTCPay Server at `pay.cqre.net`.
### Authorization Check
When a user attempts to send a message:
```python
def can_send(user: User, target_node_id: str) -> bool:
    subscription = user.active_subscription()
    if not subscription or subscription.is_expired():
        return False
    if subscription.scope == "network":
        return True
    if subscription.scope == "node_level":
        return user.allowed_nodes.filter(mesh_node_id=target_node_id).exists()
    return False
```
## Message Queue Schema
### Outbound (Cloud → Mesh)
```json
{
"message_id": "uuid-v4",
"sender_user_id": "uuid-v4",
"target_node_id": "!a1b2c3d4",
"text": "Hello from the web!",
"priority": "normal",
"max_hops": 7,
"want_ack": true,
"created_at": "2026-04-12T09:20:00Z",
"retry_count": 0
}
```
### Inbound (Mesh → Cloud)
```json
{
"message_id": "uuid-v4",
"source_node_id": "!a1b2c3d4",
"gateway_node_id": "!gateway01",
"text": "Reply from the woods",
"hop_count": 3,
"rssi": -90,
"snr": 8.5,
"received_at": "2026-04-12T09:25:00Z"
}
```
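The outbound queue entry above can be represented as a small validated record. This sketch uses a stdlib dataclass rather than the project's pydantic models to stay dependency-free; the class name `OutboundMeshMessage` is hypothetical, and the limits come from the queue schema and the OpenAPI draft (200-character texts, 7 hops):

```python
from dataclasses import dataclass

MAX_TEXT_LEN = 200  # matches OutboundMessage.text maxLength in the OpenAPI draft
MAX_HOPS = 7        # matches max_hops in the queue schema above

@dataclass
class OutboundMeshMessage:
    """One outbound (cloud -> mesh) queue entry, validated before enqueueing."""
    message_id: str
    sender_user_id: str
    target_node_id: str
    text: str
    priority: str = "normal"
    max_hops: int = MAX_HOPS
    want_ack: bool = True
    retry_count: int = 0

    def validate(self) -> None:
        if not self.target_node_id.startswith("!"):
            raise ValueError("target_node_id must look like '!a1b2c3d4'")
        if not 0 < len(self.text) <= MAX_TEXT_LEN:
            raise ValueError(f"text must be 1..{MAX_TEXT_LEN} characters")
        if not 0 < self.max_hops <= MAX_HOPS:
            raise ValueError(f"max_hops must be 1..{MAX_HOPS}")
```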
## Rate Limiting & Anti-Spam
- **Per-user**: Max 1 message per 10 seconds, burst of 5
- **Per-subscription tier**: Enforced monthly quotas
- **Per-target-node**: Max 10 web messages per hour (to prevent harassment)
- **Content filtering**: Basic profanity/spam filter on the gateway
- **Blocklist**: Users and nodes can block each other
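The per-user rule (1 message per 10 seconds, burst of 5) is a classic token bucket. A minimal in-memory sketch; a real deployment would keep the buckets in Redis so all gateway instances share state:

```python
import time

class TokenBucket:
    """Limiter holding up to `burst` tokens, refilled at `rate` tokens/sec."""

    def __init__(self, rate: float, burst: int) -> None:
        self.rate = rate
        self.burst = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; return whether the send may proceed."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

`TokenBucket(rate=0.1, burst=5)` encodes the rule above: one token every 10 seconds, with a burst allowance of 5.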
## Delivery Tracking
The gateway tracks message state:
```
PENDING -> QUEUED -> TRANSMITTED -> DELIVERED (ACK received)
                         |
                         +-> FAILED (max retries exceeded)
```
Users see delivery status in the messaging UI:
- Single checkmark: Queued
- Double checkmark: Transmitted by gateway
- Blue double checkmark: Delivered (ACK from target node)
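The state machine above can be enforced with a simple transition table. A sketch, assuming (beyond what the diagram shows) that FAILED is also reachable from QUEUED, e.g. when the queue expires:

```python
# Legal delivery-state transitions; terminal states map to empty sets.
TRANSITIONS = {
    "PENDING": {"QUEUED", "FAILED"},
    "QUEUED": {"TRANSMITTED", "FAILED"},
    "TRANSMITTED": {"DELIVERED", "FAILED"},
    "DELIVERED": set(),
    "FAILED": set(),
}

def advance(current: str, new: str) -> str:
    """Validate a delivery-state transition and return the new state."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new
```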
## Billing Integration
The gateway relies on the **Billing Service** to enforce subscriptions. The billing service:
- Creates invoices via BTCPay Server Greenfield API
- Listens to BTCPay webhooks for payment confirmation
- Manages subscription validity periods and quotas in PostgreSQL
- Deactivates old subscriptions and resets quotas on successful payment
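Per the BTCPay Greenfield API, invoice creation is a `POST /api/v1/stores/{storeId}/invoices` with an API-key `Authorization: token ...` header. A dependency-free sketch that only builds the request (the store id, key, and function name are placeholders; the backend itself uses httpx, but the request shape is the same):

```python
import json
import urllib.request

def build_invoice_request(base_url: str, store_id: str, api_key: str,
                          amount: str, currency: str = "USD") -> urllib.request.Request:
    """Build (but do not send) a Greenfield 'create invoice' request."""
    body = json.dumps({"amount": amount, "currency": currency}).encode()
    return urllib.request.Request(
        f"{base_url}/api/v1/stores/{store_id}/invoices",
        data=body,
        headers={
            "Authorization": f"token {api_key}",  # BTCPay API-key auth
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request returns the invoice JSON, including the `checkout_url` the frontend redirects the subscriber to.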
## Security Considerations
1. **Authentication**: JWT-based auth for web users, API keys for bridge daemons
2. **Encryption**: Mesh messages are encrypted with the channel key. The bridge daemon does not decrypt content; it only forwards the encrypted payload.
3. **Privacy**: The gateway logs message metadata (sender, recipient, timestamp, size) but does not log message content.
4. **Node Impersonation**: Web messages are tagged with a special prefix or sender ID indicating they originated from the gateway, preventing spoofing of local mesh nodes.
## Fallback Behavior
If no infrastructure node is currently online:
- Outbound messages remain queued for up to 24 hours
- Users are notified that delivery is delayed
- If the queue expires, the message is marked as failed and the user's quota is refunded


@@ -0,0 +1,123 @@
# System Architecture Overview
## High-Level Concept
KosmoConnect is a **three-tier system**:
1. **Edge Tier**: Solar-powered enviro-nodes running Meshtastic + custom sensor firmware
2. **Bridge Tier**: Infrastructure nodes with internet backhaul (WiFi/Ethernet/Cellular)
3. **Cloud Tier**: Central backend services and web frontends
```
┌─────────────────────────────────────────────────────────────────────────┐
│ CLOUD TIER │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌─────────────┐ │
│ │ Web API │ │ Ingestion │ │ Message │ │ Billing │ │
│ │ (Fastify/ │ │ Service │ │ Gateway │ │ & Auth │ │
│ │ Django) │ │ (TimescaleDB│ │ (RabbitMQ/ │ │ (Stripe) │ │
│ │ │ │ + Redis) │ │ MQTT) │ │ │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ └──────┬──────┘ │
│ │ │ │ │ │
│ ┌──────▼─────────────────▼─────────────────▼─────────────────▼──────┐ │
│ │ PostgreSQL │ │
│ │ (Users, Nodes, Subscriptions) │ │
│ └───────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
│ HTTPS / MQTT over TLS
┌─────────────────────────────────────────────────────────────────────────┐
│ BRIDGE TIER │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Infrastructure Node │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────┐ │ │
│ │ │ Meshtastic │ │ Bridge │ │ Backhaul (WiFi/Eth/ │ │ │
│ │ │ Radio │◄─┤ Daemon │◄─┤ Cellular) │ │ │
│ │ │ (SX1262) │ │ (Python) │ │ │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ (Mains Powered) │
└─────────────────────────────────────────────────────────────────────────┘
│ LoRa / Mesh
┌─────────────────────────────────────────────────────────────────────────┐
│ EDGE TIER │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Enviro-Node │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────────┐ │ │
│ │ │ BME280 │ │ Wind │ │ Air │ │ Meshtastic │ │ │
│ │ │ (T/H/P) │ │ Sensor │ │ Quality │ │ Firmware │ │ │
│ │ └────┬─────┘ └────┬─────┘ └────┬─────┘ │ + Sensor Module │ │ │
│ │ └─────────────┴─────────────┘ │ + Store/Forward │ │ │
│ │ │ │ + Power Manager │ │ │
│ │ ┌──────▼──────┐ └─────────┬─────────┘ │ │
│ │ │ ESP32/ │ │ │ │
│ │ │ nRF52840 │◄────────────────────────┘ │ │
│ │ └──────┬──────┘ │ │
│ │ │ │ │
│ │ ┌──────▼──────┐ │ │
│ │ │ Solar + │ │ │
│ │ │ Battery │ │ │
│ │ └─────────────┘ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ (Solar Powered) │
└─────────────────────────────────────────────────────────────────────────┘
```
## Core Principles
### 1. Open Mesh, Gated Gateway
The Meshtastic mesh itself is open. Anyone with a compatible device can join, extend range, and benefit from the enviro-node relay infrastructure. However, access to the **web-to-mesh gateway** (sending messages from the internet to the mesh) is restricted to paying subscribers.
### 2. Store-and-Forward Data Offload
Enviro-nodes collect data continuously but may not always have a direct route to an infrastructure node. Data is buffered in local flash/SD and transmitted when a route becomes available. The Meshtastic store-and-forward module may be leveraged or extended.
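The buffering behaviour can be sketched as a drop-oldest ring buffer. This is a RAM-only illustration (class and method names are hypothetical); the firmware would back the same logic with SPI flash or SD so data survives reboots:

```python
from collections import deque

class ReadingBuffer:
    """Drop-oldest ring buffer for readings awaiting a mesh route."""

    def __init__(self, capacity: int) -> None:
        self._buf: deque[bytes] = deque(maxlen=capacity)

    def append(self, packet: bytes) -> None:
        self._buf.append(packet)  # silently evicts the oldest entry when full

    def drain(self, n: int) -> list[bytes]:
        """Pop up to n oldest packets for one transmission window."""
        out: list[bytes] = []
        while self._buf and len(out) < n:
            out.append(self._buf.popleft())
        return out
```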
### 3. Separation of Concerns
- **Meshtastic handles**: Mesh routing, encryption, device-to-device messaging, channel management
- **Custom firmware handles**: Sensor reading, power management, data buffering, packet formatting
- **Backend handles**: User auth, subscription billing, data persistence, web APIs, message queuing
- **Bridge handles**: Protocol translation between Meshtastic protobufs and cloud MQTT/HTTPS
## Component Boundaries
### Enviro-Node (Edge)
**Hardware**: Custom PCB based on ESP32-S3-WROOM-1 or nRF52840 + SX1262, sensor headers, solar charge controller, battery management.
**Firmware**: Either a Meshtastic firmware fork with a custom sensor module, or a companion MCU architecture where Meshtastic runs on one chip and a sensor controller runs on another.
### Infrastructure Node (Bridge)
**Hardware**: Meshtastic device (LILYGO T-Beam, RAK4631, or custom) with reliable internet backhaul.
**Software**: A bridge daemon running alongside the Meshtastic firmware (via serial/API) that forwards environmental data to the cloud and injects outbound mesh messages from the cloud queue.
### Central Backend (Cloud)
- **Ingestion Service**: Consumes MQTT from infrastructure nodes, validates, writes to TimescaleDB
- **API Service**: REST/GraphQL API for weather data, node registry, health status
- **Message Gateway**: Manages the queue of web-to-mesh messages, handles delivery confirmations, rate limiting
- **Billing & Auth**: BTCPay Server integration for subscriptions, OAuth2/JWT for user auth, node-level permission checks
### Web Frontend (Cloud)
- **Dashboard**: Map-based weather visualization, node health, historical charts
- **Messaging Client**: Compose messages to mesh nodes by node ID or alias, view replies
- **Admin Panel**: Node onboarding, subscriber management, network diagnostics
## Technology Stack Recommendations
| Layer | Technology |
|-------|------------|
| Enviro-Node MCU | ESP32-S3 (for power/performance) or nRF52840 (for efficiency) |
| Radio | Semtech SX1262 (Meshtastic standard) |
| Sensors | BME680 (T/H/P/Gas), SPS30 (PM), Davis anemometer (wind) |
| Bridge Daemon | Python with the `meshtastic` Python library + `paho-mqtt` |
| Backend Runtime | Python (FastAPI) or Node.js (NestJS) |
| Database (Time-series) | TimescaleDB or InfluxDB |
| Database (Relational) | PostgreSQL |
| Message Queue | RabbitMQ or Redis Streams |
| Frontend | React / Vue + MapLibre GL |
| Infra | Docker, Terraform, Ansible |
## Scalability Considerations
- A single infrastructure node can serve a large mesh area, but dense networks benefit from multiple infrastructure nodes for redundancy.
- Environmental data is small and infrequent (e.g., one packet every 5-15 minutes), so bandwidth is not a concern.
- Web-to-mesh messaging is low bandwidth but requires delivery tracking and rate limiting to prevent spam.
- The system should gracefully degrade if the cloud is unreachable; the mesh continues to function locally.

docs/requirements/prd.md Normal file

@@ -0,0 +1,151 @@
# Product Requirements Document
## Project Name
KosmoConnect
## Steward
Church of Kosmo Technology Division
## Mission Statement
Build a resilient, solar-powered environmental monitoring network that doubles as an emergency communication backbone for the Church of Kosmo community and beyond. KosmoConnect is a technology project of the Church of Kosmo, developed in the open and operated as community infrastructure.
---
## Objective 1: Enviro-Node Network
### 1.1 Enviro-Node Hardware
**REQ-HW-001**: The enviro-node must be capable of year-round autonomous operation on solar power in temperate climates.
**REQ-HW-002**: The enviro-node must measure at minimum:
- Air temperature
- Relative humidity
- Barometric pressure
- Wind speed and direction
**REQ-HW-003**: The enviro-node should optionally support:
- Particulate matter (PM2.5, PM10)
- Volatile organic compounds / gas resistance
- UV index
- Rainfall
**REQ-HW-004**: The enviro-node enclosure must be IP65 rated or better.
**REQ-HW-005**: The enviro-node must operate in temperatures from -20°C to +50°C.
**REQ-HW-006**: The enviro-node must use a Meshtastic-compatible LoRa radio (SX1262 recommended).
**REQ-HW-007**: The enviro-node must buffer at least 30 days of 15-minute readings locally.
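A quick sizing check for REQ-HW-007, assuming a packed record of roughly 32 bytes (slightly above the ~30-byte packet described in the data-flow document, leaving room for framing):

```python
READINGS_PER_DAY = 24 * 60 // 15  # one reading per 15 minutes -> 96/day
RECORD_BYTES = 32                 # assumed packed record size
DAYS = 30

required_bytes = DAYS * READINGS_PER_DAY * RECORD_BYTES
print(required_bytes)  # 92160 bytes, i.e. about 90 KiB
```

Even a small SPI flash or SD card covers this by orders of magnitude, so the 30-day floor is comfortably achievable.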
### 1.2 Enviro-Node Firmware
**REQ-FW-001**: The firmware must read sensors at a configurable interval (default: 60 seconds).
**REQ-FW-002**: The firmware must store readings in a resilient local ring buffer.
**REQ-FW-003**: The firmware must transmit accumulated readings over Meshtastic at a configurable interval (default: 15 minutes during daylight, 60 minutes at night).
**REQ-FW-004**: The firmware must implement power management to maximize battery life, including deep sleep between intervals.
**REQ-FW-005**: The firmware must act as a standard Meshtastic relay, forwarding messages for other mesh clients.
**REQ-FW-006**: The firmware must support remote configuration updates over the mesh (admin channel).
**REQ-FW-007**: The firmware must report its own health status (battery voltage, solar input voltage, free storage, temperature).
### 1.3 Infrastructure Nodes
**REQ-INF-001**: Infrastructure nodes must bridge the Meshtastic mesh to the internet.
**REQ-INF-002**: Infrastructure nodes must support at least one backhaul method: WiFi, Ethernet, or LTE.
**REQ-INF-003**: Infrastructure nodes must be mains-powered or have a large enough battery/solar setup for 99.9% uptime.
**REQ-INF-004**: Infrastructure nodes must forward environmental data packets to the cloud backend without decrypting content.
**REQ-INF-005**: Infrastructure nodes must inject outbound web-to-mesh messages into the mesh.
### 1.4 Central Weather Service
**REQ-WS-001**: The service must ingest environmental data from all registered nodes.
**REQ-WS-002**: The service must provide a public map showing current conditions at each node.
**REQ-WS-003**: The service must provide historical charts for each sensor type at each node.
**REQ-WS-004**: The service must display node health (online/offline, battery level, last seen).
**REQ-WS-005**: The service must expose a public API for reading weather data (rate-limited).
---
## Objective 2: Web-to-Mesh Gateway
### 2.1 User Subscription
**REQ-SUB-001**: Users must be able to create an account and subscribe to a paid plan via BTCPay Server (pay.cqre.net).
**REQ-SUB-002**: The system must support at least two authorization scopes:
- **Network scope**: Send messages to any node on the mesh.
- **Node scope**: Send messages only to specific whitelisted nodes.
**REQ-SUB-003**: Users must be able to link Meshtastic node IDs to their account for receiving replies.
**REQ-SUB-004**: The system must enforce monthly message quotas based on the subscription tier.
**REQ-SUB-005**: Users must receive email notifications for subscription events (payment received, renewal, expiration).
### 2.2 Web Messaging
**REQ-MSG-001**: Subscribers must be able to compose and send text messages to mesh nodes from a web browser.
**REQ-MSG-002**: The web UI must show delivery status (queued, transmitted, delivered, failed).
**REQ-MSG-003**: Subscribers must be able to receive replies from mesh nodes in their web inbox.
**REQ-MSG-004**: The system must support push notifications (browser or email) for incoming replies.
**REQ-MSG-005**: Messages must be rate-limited to prevent spam and network abuse.
**REQ-MSG-006**: Messages from the gateway must be clearly identified as originating from the internet to mesh users.
### 2.3 Admin & Operations
**REQ-ADM-001**: Admins must be able to register new enviro-nodes and infrastructure nodes.
**REQ-ADM-002**: Admins must be able to view system-wide metrics (nodes online, messages sent, data ingested).
**REQ-ADM-003**: Admins must be able to broadcast emergency alerts to all mesh nodes via the gateway.
**REQ-ADM-004**: The system must generate monthly reports on network health and subscription revenue.
---
## Non-Functional Requirements
**REQ-NF-001**: The mesh must remain functional for local communication even if the cloud backend is unreachable.
**REQ-NF-002**: All cloud communications must use TLS 1.3 or better.
**REQ-NF-003**: The backend must horizontally scale to support at least 1,000 active nodes and 10,000 subscribers.
**REQ-NF-004**: The enviro-node hardware designs and firmware must be open-source.
**REQ-NF-005**: The web-to-mesh gateway software must be open-source, but the hosted instance may be operated as a paid service.
**REQ-NF-006**: The system must comply with GDPR / CCPA for user data.
**REQ-NF-007**: The system must comply with local RF regulations (FCC, CE, etc.) for the intended deployment regions.
---
## Success Metrics
| Metric | Target |
|--------|--------|
| Enviro-node uptime (sunny season) | >95% |
| Enviro-node uptime (winter) | >80% |
| Data delivery success rate | >98% |
| Web-to-mesh delivery time | <5 minutes (when infrastructure node is in range) |
| Subscriber churn rate | <5% monthly |
| Kit assembly time | <4 hours for a moderately technical user |

docs/roadmap.md Normal file

@@ -0,0 +1,150 @@
# KosmoConnect Project Roadmap
*A technology project of the Church of Kosmo*
---
## Phase 0: Foundation & Alignment
**Duration:** 2-3 months
**Goal:** Validate the core concept, secure resources, and establish the legal/technical bedrock.
### Milestones
| # | Deliverable | Success Criteria |
|---|-------------|------------------|
| 0.1 | **Project Charter Ratified** | Church of Kosmo leadership approves mission, budget, and open-source licensing strategy. |
| 0.2 | **License Stack Finalized** | Decide whether to adopt KΛ 1.1-Draft or remain on KΛ 1.0 + AGPL-3.0 + CERN-OHL-S-2.0. |
| 0.3 | **Reference Hardware Bench** | Procure 2× ESP32-S3 dev boards, 2× SX1262 modules, 2× LILYGO T-Beams, reference sensors. |
| 0.4 | **Reference Sensor Validation** | Confirm BME680 and SPS30 accuracy against a calibrated weather station over 2 weeks. |
| 0.5 | **Power Budget Proven** | Build a spreadsheet-validated, lab-measured power model proving 80+ days battery autonomy. |
| 0.6 | **Repo & CI Operational** | All placeholder CI jobs replaced with real builds; contribution guidelines published. |
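The power model behind milestone 0.5 reduces to simple arithmetic. All figures below are placeholder assumptions for illustration only; the real inputs come from the lab measurements the milestone calls for:

```python
# Placeholder assumptions; replace with bench-measured values.
BATTERY_MAH = 10_000     # e.g. three 18650 cells in parallel
USABLE_FRACTION = 0.8    # derating for temperature, aging, cutoff voltage
AVG_CURRENT_MA = 4.0     # deep sleep + periodic sensor reads + LoRa TX, averaged

autonomy_hours = BATTERY_MAH * USABLE_FRACTION / AVG_CURRENT_MA
autonomy_days = autonomy_hours / 24
print(round(autonomy_days, 1))  # 83.3 days under these assumptions
```

With these (unverified) numbers the 80-day target is met with little margin, which is exactly why the milestone demands measured rather than datasheet figures.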
### Key Decisions
- MCU: ESP32-S3 (confirmed) vs. nRF52840 (deferred to v2)
- Backhaul: WiFi-first for infrastructure nodes; LTE as v1.5 upgrade
- Cloud provider: Hetzner / DigitalOcean / AWS (to be selected)
---
## Phase 1: Proof of Concept ("Genesis Node")
**Duration:** 3-4 months
**Goal:** One end-to-end enviro-node → bridge → cloud → dashboard chain working in a controlled environment.
### Milestones
| # | Deliverable | Success Criteria |
|---|-------------|------------------|
| 1.1 | **Meshtastic Fork with Sensor Module** | Enviro-node firmware reads BME680 + SPS30 and injects a custom data packet into the mesh. |
| 1.2 | **Local Data Buffer** | Ring buffer in SPI flash stores 7 days of readings and survives reboots. |
| 1.3 | **Bridge Daemon v0.1** | Python daemon on Raspberry Pi forwards mesh packets to a local MQTT broker. |
| 1.4 | **Ingestion Service v0.1** | FastAPI service consumes MQTT, writes to TimescaleDB, exposes `/latest` and `/history`. |
| 1.5 | **Dashboard v0.1** | React app displays a single node on a map with live temperature, humidity, and pressure. |
| 1.6 | **Genesis Node Deployed** | One prototype node + one bridge node installed on Church of Kosmo property, running 24/7 for 30 days. |
### Phase 1 Metrics
- Data delivery success rate: >90%
- Dashboard uptime: >95%
- Mesh packet success (single hop): >95%
---
## Phase 2: Pilot Network ("Kosmo Constellation")
**Duration:** 4-6 months
**Goal:** Deploy 3-5 enviro-nodes in one geographic region with one or more infrastructure nodes. Onboard the first paying subscribers to the web-to-mesh gateway.
### Milestones
| # | Deliverable | Success Criteria |
|---|-------------|------------------|
| 2.1 | **Enviro-Node PCB v1.0** | First fabricated PCB integrating MCU, radio, sensor headers, and power management. |
| 2.2 | **Multi-Hop Data Offload** | Nodes 2+ hops from infrastructure successfully transmit buffered data via intermediate relays. |
| 2.3 | **Gateway Service v0.1** | Subscribers can send web messages to any node; delivery status tracked (queued → transmitted). |
| 2.4 | **Billing Integration** | Stripe subscription flow live; at least two plan tiers functional (Wanderer, Guardian). |
| 2.5 | **Messaging Client v0.1** | Web inbox supports composing, receiving replies, and viewing delivery status. |
| 2.6 | **Admin Panel v0.1** | Node registration, subscriber lookup, and emergency broadcast functional. |
| 2.7 | **Pilot Deployment** | 3–5 nodes cover a local watershed or community area; 10–20 beta subscribers active. |
### Phase 2 Metrics
- Enviro-node uptime (summer): >95%
- Web-to-mesh delivery time: <10 minutes
- Subscriber churn: <10% monthly (beta expectation)
- Kit assembly time (internal test): <4 hours
---
## Phase 3: Production Network & Kits ("The Open Mesh")
**Duration:** 6–9 months
**Goal:** Scale to 20–50 nodes, launch the enviro-node kit for sale, and stabilize all services for production load.
### Milestones
| # | Deliverable | Success Criteria |
|---|-------------|------------------|
| 3.1 | **Enviro-Node Kit v1.0** | Complete BOM, assembly manual, packaging, and first production run of 50 kits. |
| 3.2 | **Certification (FCC / CE / ISED)** | Modular radio approval + EMC testing complete for intended markets. |
| 3.3 | **Gateway Delivery ACKs** | Full delivery tracking including node-level ACKs (queued → transmitted → delivered). |
| 3.4 | **Auto-Provisioning** | New kits can self-onboard to the network via QR code + smartphone app without manual admin intervention. |
| 3.5 | **Horizontal Scaling** | Backend services run on 3+ hosts; database read replicas configured. |
| 3.6 | **Community Infrastructure Nodes** | Documented process for volunteers to host bridge nodes; at least 3 community-hosted bridges online. |
| 3.7 | **Production Launch** | 20–50 active enviro-nodes; 100+ paying subscribers; public weather dashboard live. |
### Phase 3 Metrics
- Enviro-node uptime (annual avg): >90%
- Data delivery success rate: >98%
- Web-to-mesh delivery time: <5 minutes
- Subscriber churn: <5% monthly
- Gross margin on kits: >30% (to fund network expansion)
---
## Phase 4: Ecosystem & Resilience ("The Continuum")
**Duration:** 12+ months (ongoing)
**Goal:** KosmoConnect becomes a platform. Other communities fork the stack, build compatible nodes, and participate in the wider mesh ecosystem.
### Milestones
| # | Deliverable | Success Criteria |
|---|-------------|------------------|
| 4.1 | **Public API & Webhooks** | Developers can query weather data and receive mesh events via webhooks (Sanctuary tier). |
| 4.2 | **Mesh Federation** | Interoperability experiments with other Meshtastic community networks; shared routing where appropriate. |
| 4.3 | **Enviro-Node v2.0** | nRF52840-based redesign with 50% lower sleep current and 50% smaller enclosure. |
| 4.4 | **Mobile App** | Native iOS/Android app for messaging and node management (extends or replaces web client). |
| 4.5 | **Disaster Response Integration** | Partnership with local emergency management to use the mesh for alert broadcasting during grid outages. |
| 4.6 | **Open Continuum Archive** | All historical designs, firmware, and docs mirrored to IPFS or equivalent durable storage. |
### Phase 4 Metrics
- Forks / derivative projects: >5 active
- Nodes on the mesh (including non-Church-of-Kosmo builds): >200
- Subscribers: >1,000
- Uptime during a documented emergency event: Mesh remains locally functional for >72 hours without internet
---
## Continuous Workstreams
These activities run in parallel across all phases:
| Workstream | Activities |
|------------|------------|
| **Community** | Discord/forum moderation, kit build-along events, contributor onboarding |
| **Documentation** | API docs, assembly guides, troubleshooting wikis, video tutorials |
| **Legal & Compliance** | Privacy policy updates, RF compliance in new regions, trademark guidance |
| **Finance** | Grant applications, kit pricing reviews, subscriber retention analysis |
| **Security** | Penetration testing of gateway, firmware signing, supply-chain verification |
---
## Dependencies & Risk Mitigation
| Risk | Mitigation |
|------|------------|
| Solar power insufficient in winter | Oversize battery + panel in v1; aggressive sleep optimization in v2 |
| Meshtastic protocol changes break custom module | Pin to stable releases; maintain a small upstream contribution relationship |
| Regulatory certification costs exceed budget | Use pre-certified radio modules; start with one market (e.g., USA) |
| Subscriber growth slower than expected | Double down on kit sales and community node sponsorships |
| Cloud infrastructure fails during emergency | Multiple community-hosted bridges; mesh functions locally regardless |
---
## How to Read This Roadmap
- **Phases are sequential** but overlapping. For example, kit design (Phase 3) begins before Phase 2 ends.
- **Milestones are negotiable.** If a technical discovery in Phase 1 invalidates the PCB approach, Phase 2 slips to accommodate the pivot.
- **Metrics are targets, not guarantees.** They exist to focus effort and signal when to ask for help.
> *"Through openness, we preserve. Through preservation, we evolve. Through evolution, we return."*

---

`firmware/README.md` (new file, 95 lines)
# Firmware
This directory contains all embedded software for KosmoConnect edge devices.
## Structure
```
firmware/
├── enviro-node/ # Firmware for solar-powered monitoring stations
│ ├── src/ # Main application source
│ ├── lib/ # Internal libraries
│ ├── meshtastic-patch/ # Patches or modules for Meshtastic firmware
│ └── tests/ # Unit tests (native/emu)
├── infrastructure-node/ # Software for bridge devices
│ ├── bridge-daemon/ # Python daemon running on bridge host
│ └── firmware/ # Any custom bridge device firmware
└── shared-libs/ # Libraries shared across both nodes
├── packet-format/ # Binary serialization for enviro packets
└── power-manager/ # Common power management utilities
```
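The `packet-format` library does not exist yet. As a sketch of what a CRC-protected record (the kind milestone 1.2's ring buffer would store) could look like, here is a minimal Python pack/unpack pair; the field layout, scaling factors, and record size are placeholder assumptions, not the final wire format:

```python
import struct
import zlib

# Hypothetical fixed-size record: node number, unix timestamp,
# temperature*100 (signed), humidity*100, pressure_hPa*10,
# followed by a CRC32 of the packed payload.
RECORD_FMT = "<IIhHH"  # little-endian, 14 bytes + 4-byte CRC


def pack_record(node_num, ts, temp_c, rh, hpa):
    payload = struct.pack(RECORD_FMT, node_num, ts,
                          int(temp_c * 100), int(rh * 100), int(hpa * 10))
    return payload + struct.pack("<I", zlib.crc32(payload))


def unpack_record(blob):
    payload, (crc,) = blob[:-4], struct.unpack("<I", blob[-4:])
    if zlib.crc32(payload) != crc:
        raise ValueError("CRC mismatch - corrupted record")
    node_num, ts, t, h, p = struct.unpack(RECORD_FMT, payload)
    return {"node_num": node_num, "ts": ts,
            "temperature_c": t / 100, "humidity_percent": h / 100,
            "pressure_hpa": p / 10}
```

A fixed 18-byte record makes the flash ring buffer trivially seekable, and the trailing CRC lets the reader skip records torn by a brown-out mid-write.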
## Enviro-Node Firmware
### Strategy
There are two architectural approaches:
#### A. Meshtastic Fork with Custom Module
Fork the official Meshtastic firmware and add a `kosmo_enviro` module that:
- Runs on the secondary CPU core or as a low-priority thread
- Interfaces with I2C/SPI sensors
- Manages the local data buffer
- Formats and injects data packets into the mesh router
**Pros**: Tight integration, single binary, leverages mature mesh stack
**Cons**: Build complexity, upstream sync overhead, limited to supported chipsets
#### B. Companion MCU Architecture
Use a dedicated sensor MCU (e.g., ESP32-S3 or STM32L4) that talks to a Meshtastic module (e.g., RAK4631 or T-Beam) via UART.
**Pros**: Complete isolation of concerns, easier sensor debugging, can use any MCU
**Cons**: More hardware, more power draw, inter-board communication complexity
**Decision**: Start with **Approach A** (Meshtastic fork with custom module) on ESP32-S3. This keeps the kit BOM simple and the software stack unified.
### Key Modules
1. **Sensor Manager**
- Abstracts BME680, SPS30, anemometer
- Handles sensor warmup, error recovery, calibration
2. **Data Logger**
- Ring buffer in SPIFFS / LittleFS on flash
- CRC-protected records
- Wear leveling
3. **Mesh Injector**
- Formats `kosmo_enviro_packet_t` into a Meshtastic `Data` payload
- Schedules transmissions during low-congestion windows
- Respects duty cycle limits
4. **Power Manager**
- Deep sleep orchestration
- Dynamic interval scaling based on battery voltage
- Solar charging state monitoring
5. **Config Manager**
- Persistent settings (intervals, sensor enable flags, channel keys)
- Remote config via Meshtastic admin messages
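The Power Manager's "dynamic interval scaling" is not implemented yet; one way it could map battery voltage to a reporting interval, with thresholds that are purely illustrative and would need tuning against the real solar/battery curve:

```python
def reporting_interval_s(battery_v: float) -> int:
    """Scale the enviro reporting interval with battery voltage.

    Fuller battery -> report more often; near-empty -> back off hard to
    protect the cell through cloudy stretches. All thresholds below are
    placeholder assumptions, not measured values.
    """
    if battery_v >= 4.0:      # fully charged / actively charging
        return 5 * 60
    if battery_v >= 3.7:      # healthy mid-range
        return 15 * 60
    if battery_v >= 3.5:      # getting low
        return 60 * 60
    return 6 * 60 * 60        # critical: one reading every 6 hours
```

A step function like this is deliberately hysteresis-free for clarity; the real module would likely add hysteresis so the interval does not oscillate around a threshold.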
## Infrastructure Node Software
### Bridge Daemon
A Python daemon (`infrastructure-node/bridge-daemon/`) that runs on a Linux host (Raspberry Pi, etc.) connected to a Meshtastic device via USB/serial.
**Responsibilities**:
- Connect to Meshtastic device via `meshtastic` Python API
- Listen for environmental data packets and publish to cloud MQTT
- Subscribe to cloud MQTT topics and inject messages into the mesh
- Monitor device health and report bridge status
- Support multiple backhaul transports (WiFi, Ethernet, LTE)
See [`infrastructure-node/bridge-daemon/README.md`](./infrastructure-node/bridge-daemon/README.md) for full setup instructions and RPi installation guide.
### Runtime
- Python 3.10+
- `meshtastic` library
- `paho-mqtt`
- `systemd` service file for auto-start
## Build System
- **Enviro-Node**: PlatformIO with custom board definition
- **Bridge Daemon**: Poetry or `pip` with `requirements.txt`
- **CI**: GitHub Actions for firmware builds, flash size checks, and unit tests
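Since the custom board definition does not exist yet, a starting `platformio.ini` for the enviro-node might look like the following; the board name and flags are assumptions to be replaced once the hardware is pinned down:

```ini
[env:kosmo-enviro-node]
platform = espressif32
board = esp32-s3-devkitc-1   ; placeholder until the custom board definition lands
framework = arduino
monitor_speed = 115200
build_flags = -D KOSMO_ENVIRO_NODE
```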

---

`firmware/infrastructure-node/README.md` (new file, 43 lines)
# Infrastructure Node
This directory contains the software for bridge nodes that connect the Meshtastic mesh to the internet.
## Components
### `bridge-daemon/`
The **KosmoConnect Bridge Daemon** is a Python service that runs on a Raspberry Pi (or any Linux host) connected to a Meshtastic device via USB.
**What it does:**
- Receives mesh packets and publishes them to the cloud MQTT broker
- Detects environmental data (JSON enviro packets) and routes them to `kosmo/ingest/enviro`
- Subscribes to `kosmo/mesh/outbound/#` and injects web messages back into the mesh
- Runs as a `systemd` service with auto-restart
See [`bridge-daemon/README.md`](./bridge-daemon/README.md) for setup and RPi installation instructions.
### `firmware/`
Placeholder for any custom firmware required specifically for the bridge radio (usually not needed; standard Meshtastic firmware works).
## Recommended Bridge Hardware
| Device | Notes |
|--------|-------|
| **LILYGO T-Beam 868/915MHz** | Great for Pi USB bridge; ESP32 + GPS + LoRa |
| **RAK4631** | Low power, nRF52840 based |
| **LILYGO T-Deck** | Can work if put into USB-serial mode; better used as a handheld mesh client |
## Quick Test
If you have a T-Beam or T-Deck connected to your computer:
```bash
cd bridge-daemon
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
export MESHTASTIC_DEVICE=/dev/ttyUSB0 # adjust for your OS
export MQTT_HOST=localhost
python3 -m src.main
```
Send a text from any mesh node and watch it appear on `kosmo/mesh/inbound` in your MQTT broker.
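If you prefer a script over `mosquitto_sub`, a few lines of paho-mqtt will tail the inbound topic (a broker on localhost is assumed, matching the quick test above):

```python
import json


def format_inbound(payload: bytes) -> str:
    """Render one kosmo/mesh/inbound JSON message as a log line."""
    msg = json.loads(payload.decode("utf-8"))
    return f"{msg.get('source_node_id', '?')}: {msg.get('text', '')}"


if __name__ == "__main__":
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt

    def on_connect(client, userdata, flags, reason_code, properties=None):
        client.subscribe("kosmo/mesh/inbound")

    def on_message(client, userdata, msg):
        print(format_inbound(msg.payload))

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("localhost", 1883, 60)
    client.loop_forever()
```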

---

`firmware/infrastructure-node/bridge-daemon/README.md` (new file, 175 lines)
# KosmoConnect Bridge Daemon
A Python daemon that bridges a local Meshtastic mesh network to the KosmoConnect cloud MQTT broker.
## What It Does
1. **Mesh → Cloud**
- Receives text messages from the mesh and publishes them to `kosmo/mesh/inbound`
- Detects JSON environmental data packets (sent as text) and forwards them to `kosmo/ingest/enviro`
   - Forwards position updates to `kosmo/ingest/position`
- *(Future)* Will handle custom `KOSMO_ENVIRO_APP` portnum packets from enviro-node firmware
2. **Cloud → Mesh**
- Subscribes to `kosmo/mesh/outbound/#`
- Injects text messages into the mesh, tagged with `[Web]` prefix so users know the origin
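Until the custom firmware exists, any stock Meshtastic node can emit a test reading as a JSON text message. A sketch using the `meshtastic` Python API (the field names follow the daemon's text fallback; the serial path and node ID are placeholders):

```python
import json


def build_test_reading(node_id: str, temperature_c: float, humidity: float) -> str:
    """Serialize a reading so the bridge's text fallback recognizes it.

    Key order matters: the daemon matches on the literal prefix
    '{"type": "enviro_reading"', which json.dumps preserves here.
    """
    return json.dumps({
        "type": "enviro_reading",
        "node_id": node_id,
        "payload": {"temperature_c": temperature_c, "humidity_percent": humidity},
    })


if __name__ == "__main__":
    import meshtastic.serial_interface  # third-party: pip install meshtastic

    iface = meshtastic.serial_interface.SerialInterface(devPath="/dev/ttyUSB0")
    iface.sendText(build_test_reading("!enviro01", 21.5, 55.0))
    iface.close()
```

Sent from a node one or more hops away, this also exercises the daemon's `hop_count` bookkeeping end to end.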
## Hardware Requirements
- **Raspberry Pi** (3B+ or 4 recommended) running Raspberry Pi OS or similar Debian-based Linux
- **Meshtastic device** with USB-serial interface (e.g., LILYGO T-Beam, RAK4631, or your T-Deck in USB-serial mode) connected via USB
- Reliable internet backhaul (WiFi or Ethernet)
## Quick Start (Local Dev)
You can test the daemon on your laptop without a Pi by using the Mosquitto broker from the backend stack:
```bash
cd firmware/infrastructure-node/bridge-daemon
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Option A: USB serial device
export MESHTASTIC_DEVICE=/dev/ttyUSB0 # or /dev/ttyACM0 on Linux, COM3 on Windows, /dev/cu.usbserial-* on macOS
export MQTT_HOST=localhost
export MQTT_PORT=1883
python3 -m src.main
# Option B: Network-connected device (e.g., T-Deck on WiFi)
export MESHTASTIC_HOST=192.168.1.45
export MESHTASTIC_TCP_PORT=4403
export MQTT_HOST=localhost
export MQTT_PORT=1883
python3 -m src.main
```
## Raspberry Pi Production Setup
### 1. Install Dependencies
```bash
sudo apt update
sudo apt install -y python3-venv python3-pip git
```
### 2. Clone / Copy This Directory to the Pi
```bash
cd /opt
sudo git clone https://your-repo/kosmo-connect.git
# or rsync the bridge-daemon folder
```
### 3. Run the Installer
```bash
cd /opt/kosmo-connect/firmware/infrastructure-node/bridge-daemon
sudo ./install.sh
```
### 4. Configure the Service
Edit the systemd service to point to your actual MQTT broker:
```bash
sudo systemctl edit --full kosmo-bridge
```
Update the `Environment=` lines, for example:
```ini
Environment="MQTT_HOST=your-broker.example.com"
Environment="MQTT_PORT=1883"
Environment="MQTT_USER=kosmo"
Environment="MQTT_PASS=your_secure_password"
Environment="MESHTASTIC_DEVICE=/dev/ttyUSB0"
Environment="GATEWAY_NODE_ID=!yourgateway01"
```
Save and reload:
```bash
sudo systemctl daemon-reload
sudo systemctl restart kosmo-bridge
```
### 5. Monitor Logs
```bash
sudo journalctl -u kosmo-bridge -f
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `MQTT_HOST` | `localhost` | MQTT broker hostname |
| `MQTT_PORT` | `1883` | MQTT broker port |
| `MQTT_USER` | *(empty)* | MQTT username |
| `MQTT_PASS` | *(empty)* | MQTT password |
| `MESHTASTIC_DEVICE` | `/dev/ttyUSB0` | Serial path to the Meshtastic radio (used when `MESHTASTIC_HOST` is empty) |
| `MESHTASTIC_HOST` | *(empty)* | IP address or hostname of a network-connected Meshtastic device |
| `MESHTASTIC_TCP_PORT` | `4403` | TCP port for the Meshtastic network API |
| `GATEWAY_NODE_ID` | *(empty)* | Identifier for this bridge in the cloud |
## Finding the Serial Port
On the Pi, plug in your T-Beam or T-Deck and run:
```bash
ls -l /dev/ttyUSB* /dev/ttyACM* /dev/serial/by-id/
```
Use the path that appears when the device is connected.
## Using T-Deck Over WiFi (No USB Cable)
Your T-Deck can connect to your home WiFi and expose the Meshtastic TCP API on port `4403`.
### 1. Enable WiFi on the T-Deck
Using the Meshtastic CLI over USB first (the `--host` flag only works once the device is already on the network, so the initial WiFi config has to go over serial or BLE):
```bash
meshtastic --port /dev/ttyACM0 --set network.wifi_enabled true \
  --set network.wifi_ssid "YourNetwork" --set network.wifi_psk "YourPassword"
```
Or via the on-screen menu if your T-Deck firmware supports it.
### 2. Find the T-Deck IP Address
Check your router's DHCP client list, or use a network scanner:
```bash
nmap -p 4403 192.168.1.0/24
```
### 3. Run the Bridge in TCP Mode
```bash
export MESHTASTIC_HOST=192.168.1.45
export MESHTASTIC_TCP_PORT=4403
export MQTT_HOST=your-broker.example.com
python3 -m src.main
```
The bridge will connect over TCP instead of USB-serial. This is perfect for keeping the T-Deck portable while the Pi sits near your router.
## Testing with T-Deck
When the bridge is running (serial or TCP):
1. Send a text message from any mesh node
2. Check `kosmo/mesh/inbound` on your MQTT broker — the message should appear within seconds
3. Publish a message to `kosmo/mesh/outbound/!{target_node_id}` — the target node should receive it prefixed with `[Web]`
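Step 3 can be scripted as a one-shot publisher; the broker host and target node ID below are placeholders:

```python
import json
import sys
import uuid


def build_outbound(text: str) -> str:
    """JSON body the bridge expects on kosmo/mesh/outbound/<node_id>."""
    return json.dumps({"message_id": str(uuid.uuid4()), "text": text})


if __name__ == "__main__":
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt

    target = sys.argv[1] if len(sys.argv) > 1 else "!a1b2c3d4"
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.connect("localhost", 1883, 60)
    client.publish(f"kosmo/mesh/outbound/{target}", build_outbound("Hello mesh"))
    client.loop(2)  # give paho a moment to flush the publish
    client.disconnect()
```

The target node should then receive `[Web] Hello mesh`, since the daemon prefixes every injected message.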
## Troubleshooting
- **Permission denied on `/dev/ttyUSB0`**: Add the `kosmo` user to the `dialout` group:
```bash
sudo usermod -a -G dialout kosmo
sudo systemctl restart kosmo-bridge
```
- **MQTT connection refused**: Verify your broker is reachable from the Pi (`nc -vz MQTT_HOST MQTT_PORT`)
- **Meshtastic device not found over serial**: Check cables and power supply; some devices need a powered USB hub on the Pi
- **T-Deck TCP connection refused**: Ensure the T-Deck is on the same network and the TCP API port (4403) is not blocked by a firewall

---

`firmware/infrastructure-node/bridge-daemon/deploy-pi.sh` (new file, 44 lines)
#!/usr/bin/env bash
set -euo pipefail
# One-command deploy of the KosmoConnect Bridge Daemon to a Raspberry Pi
# Run this script from your dev machine. Requires ssh access to the Pi.
PI_HOST="${1:-}"
PI_USER="${2:-pi}"
INSTALL_DIR="/opt/kosmo-bridge"
if [ -z "$PI_HOST" ]; then
echo "Usage: ./deploy-pi.sh <pi-hostname-or-ip> [pi-user]"
echo "Example: ./deploy-pi.sh 192.168.1.50 pi"
exit 1
fi
echo "=== Deploying KosmoConnect Bridge Daemon to $PI_USER@$PI_HOST ==="
# 1. Ensure target directory exists
ssh "$PI_USER@$PI_HOST" "sudo mkdir -p $INSTALL_DIR && sudo chown $PI_USER:$PI_USER $INSTALL_DIR"
# 2. Sync source files
# Note: "src" without a trailing slash, so the directory itself is created
# under $INSTALL_DIR; a trailing slash would flatten it and, combined with
# --delete, wipe unrelated files (like the venv) from $INSTALL_DIR.
rsync -avz --delete \
    src \
    requirements.txt \
    kosmo-bridge.service \
    install.sh \
    "$PI_USER@$PI_HOST:$INSTALL_DIR/"
# 3. Run installer remotely
ssh "$PI_USER@$PI_HOST" "cd $INSTALL_DIR && sudo ./install.sh"
echo ""
echo "==========================================="
echo "Deployment complete."
echo ""
echo "Next steps on the Pi:"
echo " ssh $PI_USER@$PI_HOST"
echo " sudo systemctl edit --full kosmo-bridge"
echo " # Set MQTT_HOST, MESHTASTIC_HOST, etc."
echo " sudo systemctl daemon-reload"
echo " sudo systemctl restart kosmo-bridge"
echo " sudo journalctl -u kosmo-bridge -f"
echo "==========================================="

---

`firmware/infrastructure-node/bridge-daemon/install.sh` (new file, 58 lines)
#!/usr/bin/env bash
set -euo pipefail
# KosmoConnect Bridge Daemon Installer for Raspberry Pi
# Run this script as root (or with sudo)
INSTALL_DIR="/opt/kosmo-bridge"
SERVICE_FILE="kosmo-bridge.service"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root (e.g., sudo ./install.sh)"
exit 1
fi
echo "=== KosmoConnect Bridge Installer ==="
# 1. Create user
if ! id -u kosmo &>/dev/null; then
echo "Creating kosmo user..."
useradd --system --no-create-home --home-dir "$INSTALL_DIR" kosmo
fi
# 2. Install directory
echo "Setting up $INSTALL_DIR ..."
mkdir -p "$INSTALL_DIR"
# Copy sources unless the installer is already running from $INSTALL_DIR
# (the deploy-pi.sh path rsyncs the files there first)
if [ "$(pwd -P)" != "$INSTALL_DIR" ]; then
    cp -r src "$INSTALL_DIR/"
fi
chown -R kosmo:kosmo "$INSTALL_DIR"
# 3. Python virtual environment
echo "Creating Python venv..."
python3 -m venv "$INSTALL_DIR/venv"
"$INSTALL_DIR/venv/bin/pip" install --upgrade pip
"$INSTALL_DIR/venv/bin/pip" install -r requirements.txt
# 4. Systemd service
echo "Installing systemd service..."
cp "$SERVICE_FILE" /etc/systemd/system/
systemctl daemon-reload
systemctl enable kosmo-bridge.service
echo ""
echo "==========================================="
echo "Installation complete."
echo ""
echo "Before starting the service, edit:"
echo " /etc/systemd/system/kosmo-bridge.service"
echo "to set your MQTT_HOST, MQTT_USER, MQTT_PASS, etc."
echo ""
echo "For network-connected devices (e.g., T-Deck over WiFi), uncomment:"
echo " Environment=\"MESHTASTIC_HOST=192.168.1.45\""
echo " Environment=\"MESHTASTIC_TCP_PORT=4403\""
echo "and comment out MESHTASTIC_DEVICE."
echo ""
echo "Then run:"
echo " sudo systemctl start kosmo-bridge"
echo " sudo systemctl status kosmo-bridge"
echo " sudo journalctl -u kosmo-bridge -f"
echo "==========================================="

---

`firmware/infrastructure-node/bridge-daemon/kosmo-bridge.service` (new file, 22 lines)
[Unit]
Description=KosmoConnect Bridge Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=kosmo
Group=kosmo
WorkingDirectory=/opt/kosmo-bridge
Environment="PYTHONUNBUFFERED=1"
Environment="MQTT_HOST=mqtt.kosmoconnect.example"
Environment="MQTT_PORT=1883"
Environment="MESHTASTIC_DEVICE=/dev/ttyUSB0"
# Environment="MESHTASTIC_HOST=192.168.1.45"
# Environment="MESHTASTIC_TCP_PORT=4403"
Environment="GATEWAY_NODE_ID=!gateway01"
ExecStart=/opt/kosmo-bridge/venv/bin/python -m src.main
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target

---

`firmware/infrastructure-node/bridge-daemon/requirements.txt` (new file, 3 lines)
meshtastic>=2.3.8
paho-mqtt>=2.1.0
pyserial>=3.5

---

`firmware/infrastructure-node/bridge-daemon/src/config.py` (new file, 15 lines)
import os
MQTT_HOST = os.getenv("MQTT_HOST", "localhost")
MQTT_PORT = int(os.getenv("MQTT_PORT", "1883"))
MQTT_USER = os.getenv("MQTT_USER", "")
MQTT_PASS = os.getenv("MQTT_PASS", "")
MQTT_TOPIC_INGEST = os.getenv("MQTT_TOPIC_INGEST", "kosmo/ingest/enviro")
MQTT_TOPIC_INBOUND = os.getenv("MQTT_TOPIC_INBOUND", "kosmo/mesh/inbound")
MQTT_TOPIC_OUTBOUND_PREFIX = os.getenv("MQTT_TOPIC_OUTBOUND_PREFIX", "kosmo/mesh/outbound")
MESHTASTIC_DEVICE = os.getenv("MESHTASTIC_DEVICE", "/dev/ttyUSB0")
MESHTASTIC_HOST = os.getenv("MESHTASTIC_HOST", "")
MESHTASTIC_TCP_PORT = int(os.getenv("MESHTASTIC_TCP_PORT", "4403"))
GATEWAY_NODE_ID = os.getenv("GATEWAY_NODE_ID", "")

---

`firmware/infrastructure-node/bridge-daemon/src/main.py` (new file, 193 lines)
#!/usr/bin/env python3
"""
KosmoConnect Bridge Daemon
Runs on a Raspberry Pi (or similar) connected to a Meshtastic device via USB.
Bridges the local mesh to the cloud MQTT broker:
- Mesh -> MQTT: forwards enviro data and general mesh messages
- MQTT -> Mesh: injects outbound messages from the cloud gateway
"""
import json
import logging
import os
import sys
import time
import uuid
from datetime import datetime, timezone
# Add src to path when running directly
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from meshtastic_client import MeshtasticClient
from mqtt_client import MqttClient
import config
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("bridge")
class BridgeDaemon:
def __init__(self):
self.mesh = MeshtasticClient(self._on_mesh_packet)
self.mqtt = MqttClient(self._on_mqtt_message)
def _extract_node_id(self, packet: dict) -> str:
"""Best-effort extraction of source node ID from a mesh packet."""
from_id = packet.get("fromId")
if from_id:
return from_id
from_num = packet.get("from")
if from_num is not None:
return f"!{from_num:08x}"
return "unknown"
    def _extract_hop_count(self, packet: dict) -> int:
        """Approximate hops traveled: hopStart (initial hop limit) minus hopLimit (hops remaining)."""
        hop_limit = packet.get("hopLimit", 7)
        hop_start = packet.get("hopStart", 7)
        return max(0, hop_start - hop_limit)
def _extract_rssi_snr(self, packet: dict):
return packet.get("rxRssi"), packet.get("rxSnr")
def _on_mesh_packet(self, packet: dict):
logger.debug("Mesh packet received: %s", packet)
decoded = packet.get("decoded", {})
portnum = decoded.get("portnum")
payload = decoded.get("payload", b"")
# Convert bytes payload to string if needed
if isinstance(payload, bytes):
try:
payload_str = payload.decode("utf-8")
except UnicodeDecodeError:
payload_str = None
else:
payload_str = str(payload) if payload else None
# ---------------------------------------------------------
# 1. Custom enviro packets (future firmware path)
# ---------------------------------------------------------
# TODO: when custom firmware is ready, match against a custom portnum
# if portnum == "KOSMO_ENVIRO_APP":
# self._forward_enviro(packet, payload)
# return
# ---------------------------------------------------------
# 2. Text fallback that happens to be JSON enviro data
# (useful for testing with generic Meshtastic devices)
# ---------------------------------------------------------
if portnum == "TEXT_MESSAGE_APP" and payload_str:
stripped = payload_str.strip()
if stripped.startswith('{"type": "enviro_reading"') or stripped.startswith("{\"type\":\"enviro_reading\""):
try:
enviro = json.loads(stripped)
enviro["received_at"] = datetime.now(timezone.utc).isoformat()
enviro.setdefault("node_id", self._extract_node_id(packet))
enviro.setdefault("hop_count", self._extract_hop_count(packet))
self.mqtt.publish(config.MQTT_TOPIC_INGEST, enviro)
logger.info("Forwarded enviro JSON from text packet for %s", enviro.get("node_id"))
return
except json.JSONDecodeError:
pass
# General mesh text message -> inbound gateway
gateway_id = config.GATEWAY_NODE_ID
if not gateway_id and self.mesh.iface and hasattr(self.mesh.iface, 'myInfo'):
my_num = getattr(self.mesh.iface.myInfo, 'my_node_num', None)
if my_num is not None:
gateway_id = f"!{my_num:08x}"
inbound = {
"message_id": str(uuid.uuid4()),
"source_node_id": self._extract_node_id(packet),
"gateway_node_id": gateway_id or "",
"text": stripped,
"hop_count": self._extract_hop_count(packet),
"rssi": self._extract_rssi_snr(packet)[0],
"snr": self._extract_rssi_snr(packet)[1],
"received_at": datetime.now(timezone.utc).isoformat(),
}
self.mqtt.publish(config.MQTT_TOPIC_INBOUND, inbound)
logger.info("Forwarded inbound text from %s", inbound["source_node_id"])
return
# ---------------------------------------------------------
# 3. Position packets -> could update node location in cloud
# ---------------------------------------------------------
if portnum == "POSITION_APP":
pos = decoded.get("position", {})
if pos.get("latitude") and pos.get("longitude"):
update = {
"type": "position_update",
"node_id": self._extract_node_id(packet),
"lat": pos["latitude"],
"lon": pos["longitude"],
"altitude": pos.get("altitude"),
"received_at": datetime.now(timezone.utc).isoformat(),
}
# Publish to a dedicated topic or reuse ingest with a different type
# For now, we publish to ingest topic so the backend can optionally handle it
self.mqtt.publish(config.MQTT_TOPIC_INGEST.replace("enviro", "position"), update)
logger.info("Forwarded position update from %s", update["node_id"])
return
logger.debug("Ignored packet with portnum=%s", portnum)
def _on_mqtt_message(self, topic: str, payload: str):
logger.debug("MQTT message on %s: %s", topic, payload)
prefix = config.MQTT_TOPIC_OUTBOUND_PREFIX + "/"
if not topic.startswith(prefix):
return
destination_id = topic[len(prefix):]
if not destination_id:
logger.warning("Outbound MQTT topic missing destination node ID: %s", topic)
return
try:
data = json.loads(payload)
except json.JSONDecodeError:
logger.error("Invalid JSON in outbound MQTT message on %s", topic)
return
text = data.get("text", "")
if not text:
logger.warning("Empty text in outbound MQTT message on %s", topic)
return
# Tag messages originating from the gateway so mesh users know it's from the web
tagged_text = f"[Web] {text}"
success = self.mesh.send_text(tagged_text, destination_id=destination_id)
if success:
logger.info("Injected outbound message to %s", destination_id)
else:
logger.error("Failed to inject outbound message to %s", destination_id)
def run(self):
logger.info("Starting KosmoConnect Bridge Daemon")
self.mesh.start()
self.mqtt.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
logger.info("Shutting down...")
finally:
self.mesh.stop()
self.mqtt.stop()
def main():
daemon = BridgeDaemon()
daemon.run()
if __name__ == "__main__":
main()

---

`firmware/infrastructure-node/bridge-daemon/src/meshtastic_client.py` (new file, 75 lines)
import logging
import threading
import time

from meshtastic.serial_interface import SerialInterface
from meshtastic.tcp_interface import TCPInterface
from pubsub import pub  # the meshtastic library delivers packets via pypubsub

import config

logger = logging.getLogger("bridge.meshtastic")


class MeshtasticClient:
    def __init__(self, on_packet_callback):
        self.on_packet_callback = on_packet_callback
        self.iface = None
        self._running = True
        self._thread = None
        # Received packets arrive on the "meshtastic.receive" pubsub topic,
        # not through an attribute on the interface object. Subscribing once
        # here covers every interface we open later (including reconnects).
        pub.subscribe(self._on_receive, "meshtastic.receive")

    def _connect(self):
        while self._running:
            try:
                if config.MESHTASTIC_HOST:
                    logger.info("Connecting to Meshtastic TCP host %s:%s",
                                config.MESHTASTIC_HOST, config.MESHTASTIC_TCP_PORT)
                    self.iface = TCPInterface(hostname=config.MESHTASTIC_HOST,
                                              portNumber=config.MESHTASTIC_TCP_PORT)
                else:
                    logger.info("Connecting to Meshtastic serial device %s", config.MESHTASTIC_DEVICE)
                    self.iface = SerialInterface(devPath=config.MESHTASTIC_DEVICE)
                # Log our own node number if the interface exposes it
                my_info = getattr(self.iface, "myInfo", None)
                if my_info:
                    logger.info("Connected. My node num: %s", getattr(my_info, "my_node_num", "unknown"))
                else:
                    logger.info("Connected to Meshtastic device.")
                return
            except Exception as e:
                logger.error("Failed to connect to Meshtastic: %s. Retrying in 5s...", e)
                time.sleep(5)

    def _on_receive(self, packet, interface):
        try:
            self.on_packet_callback(packet)
        except Exception as e:
            logger.exception("Error handling mesh packet: %s", e)

    def start(self):
        self._thread = threading.Thread(target=self._connect, daemon=True)
        self._thread.start()

    def stop(self):
        self._running = False
        if self.iface:
            try:
                self.iface.close()
            except Exception:
                pass

    def send_text(self, text: str, destination_id: str = None, channel_index: int = 0):
        if not self.iface:
            logger.warning("Meshtastic not connected, cannot send text")
            return False
        try:
            logger.info("Sending text to %s: %s", destination_id or "broadcast", text)
            # Only pass destinationId when set, so the library's broadcast
            # default applies instead of an explicit None.
            kwargs = dict(text=text, channelIndex=channel_index, wantAck=True)
            if destination_id:
                kwargs["destinationId"] = destination_id
            self.iface.sendText(**kwargs)
            return True
        except Exception as e:
            logger.exception("Failed to send text: %s", e)
            return False

---

`firmware/infrastructure-node/bridge-daemon/src/mqtt_client.py` (new file, 66 lines)
import json
import logging
import threading
import time

import paho.mqtt.client as mqtt

import config

logger = logging.getLogger("bridge.mqtt")


class MqttClient:
    def __init__(self, on_message_callback):
        self.on_message_callback = on_message_callback
        self.client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
        if config.MQTT_USER:
            self.client.username_pw_set(config.MQTT_USER, config.MQTT_PASS)
        self.client.on_connect = self._on_connect
        self.client.on_message = self._on_message
        self.client.on_disconnect = self._on_disconnect
        self._connected = False

    def _on_connect(self, client, userdata, flags, rc, properties=None):
        if rc == 0:
            self._connected = True
            logger.info("MQTT connected to %s:%s", config.MQTT_HOST, config.MQTT_PORT)
            topic = f"{config.MQTT_TOPIC_OUTBOUND_PREFIX}/#"
            client.subscribe(topic)
            logger.info("Subscribed to %s", topic)
        else:
            logger.error("MQTT connection failed with code %s", rc)

    def _on_disconnect(self, client, userdata, disconnect_flags, rc, properties=None):
        self._connected = False
        logger.warning("MQTT disconnected (rc=%s). Reconnecting...", rc)

    def _on_message(self, client, userdata, msg):
        try:
            self.on_message_callback(msg.topic, msg.payload.decode("utf-8"))
        except Exception as e:
            logger.exception("Error handling MQTT message: %s", e)

    def start(self):
        def _loop():
            # Retry the initial connection until the broker is reachable;
            # loop_forever() then handles reconnects on its own.
            while True:
                try:
                    self.client.connect(config.MQTT_HOST, config.MQTT_PORT, 60)
                    self.client.loop_forever(retry_first_connection=True)
                except Exception as e:
                    logger.error("MQTT loop error: %s. Reconnecting in 5s...", e)
                    time.sleep(5)

        t = threading.Thread(target=_loop, daemon=True)
        t.start()

    def stop(self):
        self.client.disconnect()

    def publish(self, topic: str, payload: dict):
        try:
            self.client.publish(topic, json.dumps(payload))
            logger.debug("Published to %s", topic)
        except Exception as e:
            logger.exception("Failed to publish to %s: %s", topic, e)

---

bridge-daemon integration test (new file, 201 lines)
#!/usr/bin/env python3
"""
Integration test for the Bridge Daemon.
Requires a running MQTT broker on localhost:1883 (e.g., Mosquitto from backend docker-compose).
"""
import json
import os
import sys
import time
import unittest
from datetime import datetime, timezone
from unittest.mock import MagicMock, patch
import paho.mqtt.client as mqtt
# Ensure src is importable
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "src"))
os.environ.setdefault("MQTT_HOST", "localhost")
os.environ.setdefault("MQTT_PORT", "1883")
os.environ.setdefault("MESHTASTIC_DEVICE", "/dev/fake")
os.environ.setdefault("MESHTASTIC_HOST", "")
os.environ.setdefault("GATEWAY_NODE_ID", "!testgateway")
from main import BridgeDaemon
import config
class TestBridgeDaemon(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.mqtt_client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
cls.mqtt_client.connect("localhost", 1883, 60)
cls.mqtt_client.loop_start()
cls.inbound_msgs = []
cls.ingest_msgs = []
def on_message(client, userdata, msg):
payload = json.loads(msg.payload.decode())
if msg.topic == "kosmo/mesh/inbound":
cls.inbound_msgs.append(payload)
elif msg.topic == "kosmo/ingest/enviro":
cls.ingest_msgs.append(payload)
cls.mqtt_client.on_message = on_message
cls.mqtt_client.subscribe("kosmo/mesh/inbound")
cls.mqtt_client.subscribe("kosmo/ingest/enviro")
@classmethod
def tearDownClass(cls):
cls.mqtt_client.loop_stop()
cls.mqtt_client.disconnect()
def test_outbound_mesh_injection(self):
"""MQTT -> Mesh: outbound message should trigger sendText on the mock interface."""
mock_iface = MagicMock()
mock_iface.myInfo = MagicMock()
mock_iface.myInfo.my_node_num = 0xDEADBEEF
with patch("main.MeshtasticClient") as MockMesh:
instance = MockMesh.return_value
instance.iface = mock_iface
instance.start = MagicMock()
instance.stop = MagicMock()
instance.send_text = MagicMock(return_value=True)
daemon = BridgeDaemon()
daemon.mesh = instance
daemon.mqtt.start()
time.sleep(1)
# Publish outbound message
outbound = {"message_id": "msg-123", "text": "Hello mesh"}
self.mqtt_client.publish("kosmo/mesh/outbound/!a1b2c3d4", json.dumps(outbound))
time.sleep(1.5)
instance.send_text.assert_called()
args, kwargs = instance.send_text.call_args
self.assertIn("[Web] Hello mesh", args[0])
self.assertEqual(kwargs.get("destination_id"), "!a1b2c3d4")
daemon.mqtt.stop()
def test_inbound_mesh_to_mqtt(self):
"""Mesh -> MQTT: text packet from mesh should appear on kosmo/mesh/inbound."""
mock_iface = MagicMock()
mock_iface.myInfo = MagicMock()
mock_iface.myInfo.my_node_num = 0xDEADBEEF
with patch("main.MeshtasticClient") as MockMesh:
instance = MockMesh.return_value
instance.iface = mock_iface
instance.start = MagicMock()
instance.stop = MagicMock()
daemon = BridgeDaemon()
daemon.mesh = instance
daemon.mqtt.start()
time.sleep(0.5)
# Simulate mesh text packet
packet = {
"fromId": "!deadbeef",
"decoded": {
"portnum": "TEXT_MESSAGE_APP",
"payload": b"Hello from the woods",
},
"hopLimit": 5,
"hopStart": 7,
"rxRssi": -88,
"rxSnr": 9.5,
}
daemon._on_mesh_packet(packet)
time.sleep(1)
daemon.mqtt.stop()
# Give MQTT a moment to flush
time.sleep(0.5)
self.assertTrue(
any(m.get("source_node_id") == "!deadbeef" and m.get("text") == "Hello from the woods" for m in self.inbound_msgs),
f"Expected inbound message not found. Got: {self.inbound_msgs}"
)
def test_enviro_json_text_fallback(self):
"""Mesh -> MQTT: text packet containing enviro JSON should be routed to kosmo/ingest/enviro."""
mock_iface = MagicMock()
mock_iface.myInfo = MagicMock()
mock_iface.myInfo.my_node_num = 0xCAFEBABE
with patch("main.MeshtasticClient") as MockMesh:
instance = MockMesh.return_value
instance.iface = mock_iface
instance.start = MagicMock()
instance.stop = MagicMock()
daemon = BridgeDaemon()
daemon.mesh = instance
daemon.mqtt.start()
time.sleep(0.5)
enviro_payload = {
"type": "enviro_reading",
"node_id": "!enviro01",
"payload": {
"temperature_c": 21.5,
"humidity_percent": 55.0,
},
}
packet = {
"fromId": "!enviro01",
"decoded": {
"portnum": "TEXT_MESSAGE_APP",
"payload": json.dumps(enviro_payload).encode(),
},
"hopLimit": 3,
"hopStart": 5,
}
daemon._on_mesh_packet(packet)
time.sleep(1)
daemon.mqtt.stop()
time.sleep(0.5)
self.assertTrue(
any(m.get("node_id") == "!enviro01" and m.get("payload", {}).get("temperature_c") == 21.5 for m in self.ingest_msgs),
f"Expected ingest message not found. Got: {self.ingest_msgs}"
)
class TestMeshtasticClientConnection(unittest.TestCase):
def test_serial_interface_used_by_default(self):
with patch("meshtastic_client.SerialInterface") as MockSerial, \
patch("meshtastic_client.TCPInterface") as MockTCP:
from meshtastic_client import MeshtasticClient
client = MeshtasticClient(lambda p: None)
client._connect()
MockSerial.assert_called_once_with(devPath="/dev/fake")
MockTCP.assert_not_called()
client.stop()
def test_tcp_interface_used_when_host_set(self):
with patch.dict(os.environ, {"MESHTASTIC_HOST": "192.168.1.45", "MESHTASTIC_TCP_PORT": "4403"}, clear=False):
# re-import config to pick up env change
import importlib
import config as cfg_module
importlib.reload(cfg_module)
with patch("meshtastic_client.SerialInterface") as MockSerial, \
patch("meshtastic_client.TCPInterface") as MockTCP:
from meshtastic_client import MeshtasticClient
client = MeshtasticClient(lambda p: None)
client._connect()
MockTCP.assert_called_once_with(hostname="192.168.1.45", portNumber=4403)
MockSerial.assert_not_called()
client.stop()
if __name__ == "__main__":
unittest.main()

hardware/README.md Normal file

@@ -0,0 +1,79 @@
# Hardware
This directory contains all hardware designs, schematics, BOMs, and mechanical files for the KosmoConnect project.
## Directory Structure
```
hardware/
├── enviro-node/ # Solar-powered environmental station
│ ├── pcb/ # KiCad / EasyEDA project files
│ ├── enclosure/ # 3D models, STL files, mounting hardware
│ ├── sensors/ # Sensor datasheets, integration notes
│ ├── power/ # Solar panel, battery, charge controller specs
│ └── bom/ # Bill of materials, sourcing guides
├── infrastructure-node/ # Bridge node hardware (may be off-the-shelf)
│ ├── off-the-shelf/ # Recommended COTS devices (T-Beam, RAK, etc.)
│ └── custom/ # Optional custom bridge PCB designs
└── common/ # Shared libraries, footprints, symbols
├── kicad-libs/
└── 3d-models/
```
## Enviro-Node Hardware Overview
### Brain
- **Option A**: ESP32-S3-WROOM-1 (dual-core, WiFi/BT if needed, good dev ecosystem)
- **Option B**: nRF52840 (lower power, native Meshtastic support, steeper development curve)
**Decision**: Start with ESP32-S3 for rapid development; evaluate nRF52840 for v2 power optimization.
### Radio
- **Semtech SX1262** (915 MHz NA / 868 MHz EU / 433 MHz regions)
- TCXO for stability across temperatures
- External antenna (SMA or u.FL)
### Sensors (v1)
- **BME680**: Temperature, humidity, pressure, VOC/gas
- **SPS30**: Particulate matter (PM1.0, PM2.5, PM4.0, PM10)
- **Anemometer**: Davis 6410 or custom 3-cup + wind vane
- **Rain gauge**: Optional tipping bucket (WH-SP-RG)
### Power System
- **Solar Panel**: 10W monocrystalline
- **Battery**: 3.2V 32700 LiFePO4 6000mAh (safer than LiPo for unattended outdoor deployment)
- **Charge Controller**: Custom MPPT or CN3791-based board
- **Power Management**: TPS63001 buck-boost, load switches for sensors
### Enclosure
- Polycarbonate or ABS outdoor enclosure
- Solar panel mounted on lid or external pole
- Glands for antenna and sensor cables
- Passive ventilation for accurate T/H readings
## Power Budget (v1 Estimate)
| Component | Active Current | Sleep Current | Duty Cycle | Daily mAh |
|-----------|---------------|---------------|------------|-----------|
| ESP32-S3 | 80mA | 15µA | 1% | ~22 |
| SX1262 | 10mA TX | 0.5µA | 0.1% | ~1 |
| BME680 | 3mA | 0.15µA | 1% | ~1 |
| SPS30 | 60mA | 0mA | 1% | ~15 |
| Anemometer | 0mA (passive) | 0mA | 100% | 0 |
| Quiescent | - | 0.5mA | 99% | ~12 |
| **Total** | | | | **~51 mAh/day** |
A 10W panel in winter (2h effective sun) produces ~400mAh/day. A 6000mAh battery provides ~80 days of autonomy. This is viable but tight; v2 will aggressively optimize sleep current.
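The battery math above can be sanity-checked with a few lines of Python (a rough sketch; the currents and duty cycles come from the table, everything else here is my own assumption, not from the design files):

```python
# Rough daily energy budget for the enviro-node (values from the table above).
# daily_mah = 24h * (active_mA * duty + sleep_mA * (1 - duty))

COMPONENTS = {
    # name: (active_mA, sleep_mA, duty_cycle)
    "esp32s3":   (80.0, 0.015,   0.01),
    "sx1262":    (10.0, 0.0005,  0.001),
    "bme680":    (3.0,  0.00015, 0.01),
    "sps30":     (60.0, 0.0,     0.01),
    "quiescent": (0.5,  0.5,     1.0),  # always-on regulator/monitor losses
}

def daily_mah(active_ma: float, sleep_ma: float, duty: float) -> float:
    return 24.0 * (active_ma * duty + sleep_ma * (1.0 - duty))

total = sum(daily_mah(*c) for c in COMPONENTS.values())
print(f"total load: ~{total:.0f} mAh/day")          # lands near the table's ~51 mAh/day
print(f"ideal autonomy: ~{6000 / total:.0f} days")  # raw 6000 mAh / daily load
```

Raw division of the 6000 mAh cell by the ~50 mAh/day load gives well over 100 days; the ~80-day figure above presumably applies a usable-capacity derating.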
## Recommended Infrastructure Node Hardware
For rapid deployment, use off-the-shelf Meshtastic devices:
- **LILYGO T-Beam 868/915MHz** + ESP32 (good for WiFi/LTE backhaul)
- **RAKwireless WisGate Edge Lite 2** (if building a dedicated gateway)
- **Raspberry Pi 4 + RAK2287 HAT** (for a more powerful Linux-based bridge)
## Design Files
- [Enviro-Node v1 BOM](./enviro-node/bom/bom-v1.md) *(placeholder)*
- [Power Budget Spreadsheet](./enviro-node/power/budget-v1.ods) *(placeholder)*
- [Enclosure Assembly Guide](./enviro-node/enclosure/assembly-v1.md) *(placeholder)*

kits/README.md Normal file

@@ -0,0 +1,47 @@
# Kits
This directory contains everything needed to manufacture, package, and support the KosmoConnect enviro-node kit.
## Structure
```
kits/
├── enviro-node-v1/ # Assembly instructions and documentation for v1 kit
│ ├── manual/
│ │ ├── assembly.md
│ │ ├── flashing.md
│ │ ├── mounting.md
│ │ └── troubleshooting.md
│ ├── packaging/ # Box inserts, labels, QR codes
│ └── certification/ # FCC, CE, IC test reports and declarations
└── packaging/ # Generic packaging designs and supplier contacts
├── box-art/
└── supplier-notes.md
```
## Kit Contents (v1)
1. Pre-flashed ESP32-S3 + SX1262 main board
2. BME680 sensor module
3. SPS30 sensor module with cable
4. Wind sensor (anemometer + wind vane) with mounting hardware
5. Solar panel (10W) with cable
6. LiFePO4 battery (32700 cell in holder)
7. Charge controller PCB
8. Enclosure (base, lid, gasket, glands)
9. Antenna (fiberglass whip) with u.FL to SMA pigtail
10. Mounting pole brackets
11. Quick-start card with QR code to onboarding portal
## Certification Strategy
- Obtain modular certification for the radio module where possible
- Perform EMC and RF exposure testing for the complete assembled unit
- Target markets: USA (FCC Part 15), EU (CE/RED), Canada (ISED)
- Maintain technical construction files (TCF) for EU compliance
## Assembly Difficulty
Target: **Intermediate hobbyist** (can solder through-hole, crimp connectors, follow wiring diagrams). Estimated assembly time: 3-4 hours.
Future v2 may offer a **pre-assembled** option for non-technical users.

legal/LICENSE Normal file

@@ -0,0 +1,32 @@
# Licensing Notice
KosmoConnect is a technology project of the Church of Kosmo. It is a multi-component project with different licenses applied to different parts.
## Software
All software in the `firmware/`, `backend/`, `web/`, `ops/`, and `tests/` directories is licensed under the **GNU Affero General Public License v3.0 (AGPL-3.0)** unless otherwise noted in individual files.
This means:
- You are free to use, modify, and distribute the software.
- If you run a modified version of the backend or web services on a server, you must make the source code available to users of that service.
- Any derivative works must also be under AGPL-3.0.
## Hardware
All hardware designs in the `hardware/` and `kits/` directories are licensed under the **CERN Open Hardware Licence Version 2 - Strongly Reciprocal (CERN-OHL-S-2.0)** unless otherwise noted.
This means:
- You are free to study, modify, make, and distribute the hardware designs.
- If you distribute products based on these designs, you must share the corresponding design files under the same license.
## Documentation
Documentation in the `docs/` directory is licensed under **Creative Commons Attribution-ShareAlike 4.0 International (CC-BY-SA-4.0)**.
## Exceptions
Some components may include third-party libraries or designs with their own licenses. These are noted in the respective directories.
## Commercial Use
Commercial use of the hardware designs and self-hosted software is permitted under the terms above. The hosted KosmoConnect service operated by Church of Kosmo is a separate offering and may include additional terms of service.

legal/LICENSE-KOSMIC-DRAFT-1.1.md Normal file

@@ -0,0 +1,118 @@
# 🌌 The Kosmic License (KΛ 1.1-Draft)
### *A License for Knowledge in Balance and Continuity*
*(Issued by the Church of Kosmo)*
---
> **Note:** This is a draft proposal for improving the KΛ 1.0 license. It preserves the spiritual and ethical intent of the original while adding legal clarity, enforceability, and protections for both licensors and users. The Church of Kosmo may choose to adopt this draft in whole, in part, or not at all.
---
## **Preamble**
This License is founded upon the principle that **knowledge is a living element**,
a gift of the cosmos meant to be shared, preserved, and evolved with compassion.
By using, sharing, or adapting any work under this License,
you join the continuum of seekers who honor **balance, truth, and stewardship**
as expressed in the *Codex of the Great Year* of the Church of Kosmo.
---
## **Article I — Definitions**
For the purposes of this License:
- **"The Work"** means the tangible material — software, hardware designs, documentation, or creative expression — to which this License is attached.
- **"You"** means any individual or entity exercising the permissions granted by this License.
- **"Derivative Work"** means any modification, translation, adaptation, or other transformation of The Work.
- **"Harm"** means direct violence against sentient beings, deliberate dissemination of known falsehoods intended to cause injury, or willful destruction of ecosystems.
- **"Compatible License"** means any license approved by the Open Source Initiative (OSI), any license listed by the Free Software Foundation (FSF) as free, or any Creative Commons license that requires ShareAlike.
---
## **Article II — Freedom to Use and Transform**
1. Subject to the terms of this License, You are free to **use, study, copy, modify, and distribute** The Work for any purpose — personal, educational, creative, or commercial.
2. The Church of Kosmo **encourages** (but does not legally require) that commercial use be aligned with ecological sustainability and ethical labor practices. This encouragement is an expression of values, not a condition of the license grant.
> *Rationale:* Making the ethical alignment a strict legal condition can render the license unenforceable in many jurisdictions due to vagueness. Expressing it as a strong social norm preserves the spirit while maintaining legal validity.
---
## **Article III — The Covenant of Attribution**
1. All Derivative Works must include the following acknowledgment in a reasonably prominent location:
> “Based on materials from the Church of Kosmo, shared under the Kosmic License (KΛ 1.1).”
2. Attribution must remain visible in digital, printed, or physical derivative forms.
3. You must not misrepresent the origin of The Work or use the name or marks of the Church of Kosmo to imply endorsement of Your Derivative Work without prior written permission.
---
## **Article IV — Stewardship and Non-Exploitation**
1. No one may claim ownership or exclusive control over any *idea, principle, or spiritual teaching* contained in The Work. The tangible expressions of The Work remain subject to copyright and other applicable law.
2. You may not use The Work to promote **Harm** (as defined in Article I).
3. The Church of Kosmo **encourages** (but does not legally require) that profits derived from The Work contribute to education, preservation, or renewal in spirit with the Codex.
> *Rationale:* Prohibiting use for "harm" is given a narrow, objective definition to improve enforceability. The profit-sharing clause is shifted to an ethical request.
---
## **Article V — Continuity and Return (Copyleft)**
1. If You distribute or publicly perform a Derivative Work of The Work, You must license that Derivative Work under **this License or a Compatible License**.
2. Contributors are encouraged to archive their work in durable, accessible repositories — the **Open Continuum** — ensuring its preservation beyond any single platform.
3. When possible, modified works should return to the commons, completing the cycle of knowledge.
---
## **Article VI — Patent Grant**
1. Each contributor to The Work grants You a perpetual, worldwide, non-exclusive, royalty-free, irrevocable patent license to make, use, sell, offer for sale, import, and otherwise run, modify and propagate the contents of The Work.
2. If You institute patent litigation against anyone alleging that The Work infringes Your patents, Your rights under this License terminate automatically.
> *Rationale:* This is adapted from the Apache License 2.0. It prevents patent trolls from weaponizing the work while protecting users.
---
## **Article VII — Disclaimer of Warranty and Limitation of Liability**
1. **THE WORK IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.**
2. IN NO EVENT SHALL THE CHURCH OF KOSMO OR ANY CONTRIBUTOR BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF, OR IN CONNECTION WITH THE WORK OR THE USE OR OTHER DEALINGS IN THE WORK.
> *Rationale:* Without this clause, licensors are exposed to unlimited liability for bugs, security flaws, or misuse of the work. This is standard in every modern open-source license.
---
## **Article VIII — Severability and Choice of Law**
1. If any provision of this License is held to be unenforceable, the remaining provisions shall remain in full force and effect.
2. This License shall be interpreted in accordance with the laws of *[Jurisdiction to be chosen by the Church of Kosmo]*.
3. Any disputes arising under this License shall be resolved through good-faith negotiation, and if necessary, binding arbitration in *[Jurisdiction]*.
> *Rationale:* A severability clause ensures the license does not collapse if one provision is struck down. Choice of law provides predictability.
---
## **Article IX — Endurance Beyond Ownership**
1. Should the Church of Kosmo cease to exist, this License endures indefinitely.
2. Any being — human or otherwise — may continue to share, protect, and evolve the Work under these principles.
3. Knowledge cannot be owned; it can only be tended.
---
## **Closing Declaration**
> “Through openness, we preserve.
> Through preservation, we evolve.
> Through evolution, we return.”
This License affirms that the purpose of knowledge is not possession,
but participation in the great continuum of understanding.
---
**Identifier:** `KΛ-1.1-Draft`
**Maintained by:** *The Church of Kosmo*
**Year:** 2025

legal/LICENSE-KOSMIC.md Normal file

@@ -0,0 +1,74 @@
# 🌌 The Kosmic License (KΛ 1.0)
### *A License for Knowledge in Balance and Continuity*
*(Issued by the Church of Kosmo)*
---
## **Preamble**
This License is founded upon the principle that **knowledge is a living element**,
a gift of the cosmos meant to be shared, preserved, and evolved with compassion.
By using, sharing, or adapting any work under this License,
you join the continuum of seekers who honor **balance, truth, and stewardship**
as expressed in the *Codex of the Great Year* of the Church of Kosmo.
---
## **Article I — Freedom to Use and Transform**
1. You are free to **use, study, copy, and modify** the Work for any purpose — personal, educational, or creative.
2. Transformation is encouraged, provided that it seeks to **enhance understanding, harmony, or renewal.**
3. Commercial use is permitted only when aligned with ecological and ethical balance.
---
## **Article II — The Covenant of Attribution**
1. All derivatives must include acknowledgment of the original source and spirit:
> “Based on materials from the Church of Kosmo, shared under the Kosmic License (KΛ 1.0).”
2. Attribution must remain visible in digital, printed, or derivative forms.
3. The essence of the Work — its call for balance and continuity — must not be misrepresented or used for harm.
---
## **Article III — Stewardship and Non-Exploitation**
1. No one may claim ownership or exclusive control over any part of the Work.
2. The Work shall not be used to promote violence, misinformation, or the destruction of life or ecosystems.
3. Profits derived from the Work should contribute to education, preservation, or renewal in spirit with the Codex.
---
## **Article IV — Continuity and Return**
1. Derivative works must remain **open and shareable** under the same or a compatible license.
2. Contributors are encouraged to archive their work in the **Open Continuum**, ensuring its preservation.
3. When possible, all modified works should return to the commons — completing the cycle of knowledge.
---
## **Article V — Endurance Beyond Ownership**
1. Should the Church of Kosmo cease to exist, this License endures indefinitely.
2. Any being — human or otherwise — may continue to share, protect, and evolve the Work under these principles.
3. Knowledge cannot be owned; it can only be tended.
---
## **Closing Declaration**
> “Through openness, we preserve.
> Through preservation, we evolve.
> Through evolution, we return.”
This License affirms that the purpose of knowledge is not possession,
but participation in the great continuum of understanding.
---
**Identifier:** `KΛ-1.0`
**Maintained by:** *The Church of Kosmo*
**Year:** 2025
**URL:** https://kosmo.foundation/license/KΛ-1.0
**FAQ / Interpretation Guide:** [LICENSE_FAQ.md](./LICENSE_FAQ.md)

legal/README.md Normal file

@@ -0,0 +1,37 @@
# Legal & Licensing
This directory contains all licensing information for the KosmoConnect project.
## Project Philosophy
KosmoConnect is a **technology project of the Church of Kosmo**. The project operates under the spiritual and ethical principles of **[The Kosmic License](./LICENSE-KOSMIC.md)** (KΛ 1.0): knowledge is a living element, meant to be shared, preserved, and evolved with compassion.
## How Licensing Is Applied
Because The Kosmic License expresses values that are not always enforceable for technical works in all legal jurisdictions, we apply it at the **project level** while using industry-standard, legally tested licenses for specific technical artifacts:
| Artifact | License | Rationale |
|----------|---------|-----------|
| **Project as a whole** (charter, mission, documentation) | [The Kosmic License KΛ 1.0](./LICENSE-KOSMIC.md) | Expresses the Church of Kosmo's values and stewardship ethic. |
| **Software** (`firmware/`, `backend/`, `web/`, `ops/`, `tests/`) | **AGPL-3.0** | Ensures that anyone running a modified version of our networked software must share their changes. Protects the commons from enclosure. |
| **Hardware** (`hardware/`, `kits/`) | **CERN-OHL-S-2.0** | The strongest open-hardware license; ensures anyone distributing products based on our designs must also share their design files. |
| **Improved draft of The Kosmic License** | [KΛ 1.1-Draft](./LICENSE-KOSMIC-DRAFT-1.1.md) | A proposal for increasing legal enforceability while preserving spiritual intent. |
## For Contributors
By contributing to KosmoConnect, you agree that:
1. Your contributions to software are licensed under **AGPL-3.0**.
2. Your contributions to hardware designs are licensed under **CERN-OHL-S-2.0**.
3. Your contributions to documentation and creative works are licensed under **The Kosmic License (KΛ 1.0)** or **CC-BY-SA-4.0**.
4. You grant the project a **patent license** as described in standard open-source terms.
## For Users and Builders
- You are free to build the enviro-node for personal use, community deployment, or commercial sale.
- If you sell hardware based on our designs, you **must** share your design files under CERN-OHL-S-2.0.
- If you run a modified version of our backend or gateway software on a server, you **must** share your source code under AGPL-3.0.
- We **encourage** (but do not legally require) that any profits derived from KosmoConnect support education, ecological preservation, or community resilience — in the spirit of the Codex.
## Questions?
Contact the Church of Kosmo Technology Division for clarifications on licensing intent and compatibility.

ops/README.md Normal file

@@ -0,0 +1,63 @@
# Operations (Ops)
This directory contains all infrastructure-as-code, deployment automation, and monitoring configuration.
## Structure
```
ops/
├── terraform/ # Cloud infrastructure definitions
│ ├── modules/
│ ├── environments/
│ │ ├── staging/
│ │ └── production/
│ └── global/
├── ansible/ # Server provisioning and configuration
│ ├── playbooks/
│ ├── roles/
│ └── inventory/
└── monitoring/ # Observability stack
├── prometheus/
├── grafana/
├── loki/
└── alertmanager/
```
## Terraform
Defines the cloud infrastructure on the chosen provider (Hetzner, AWS, or DigitalOcean recommended for cost efficiency).
**Resources**:
- Kubernetes cluster or Docker Swarm hosts
- PostgreSQL managed database (or self-hosted)
- TimescaleDB instance
- RabbitMQ / Redis managed service
- Object storage (S3-compatible) for backups and kit assets
- Load balancers and DNS records
- VPN / WireGuard for secure bridge-to-cloud communication
## Ansible
Playbooks for:
- Installing Docker and dependencies on bare metal
- Configuring infrastructure nodes (Raspberry Pi OS setup, bridge daemon deployment)
- Rotating TLS certificates
- Security hardening (fail2ban, firewall rules)
## Monitoring
Stack: Prometheus + Grafana + Loki + Alertmanager
**Metrics**:
- Node uptime and health
- Message throughput (inbound/outbound)
- API request rates and error rates
- Database performance
- Bridge daemon connectivity
**Alerts**:
- Node offline > 6 hours
- Bridge daemon disconnected > 15 minutes
- API error rate > 1%
- Disk space > 85%
- Subscription payment failures spike
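The node-offline alert is, at its core, a last-seen comparison; a minimal sketch of that check in Python (only the 6-hour threshold comes from the list above; the node IDs and data shape are illustrative):

```python
from datetime import datetime, timedelta, timezone

OFFLINE_AFTER = timedelta(hours=6)  # "Node offline > 6 hours"

def offline_nodes(last_seen: dict, now=None):
    """Return node IDs whose last heartbeat is older than the threshold."""
    now = now or datetime.now(timezone.utc)
    return sorted(nid for nid, ts in last_seen.items() if now - ts > OFFLINE_AFTER)

now = datetime(2026, 4, 12, 12, 0, tzinfo=timezone.utc)
last_seen = {
    "!a1b2c3d4": now - timedelta(hours=1),  # healthy
    "!b2c3d4e5": now - timedelta(hours=7),  # should alert
}
print(offline_nodes(last_seen, now=now))  # ['!b2c3d4e5']
```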

Binary file not shown.

scripts/run-simulator.sh Executable file

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
set -euo pipefail
# Run the bridge simulator from the repo root
cd "$(dirname "$0")/.."
# Prepend without leaving an empty (cwd) entry when PYTHONPATH is unset
export PYTHONPATH="$(pwd)/backend/shared${PYTHONPATH:+:$PYTHONPATH}"
python3 scripts/simulate-bridge.py "$@"

scripts/simulate-bridge.py Normal file

@@ -0,0 +1,83 @@
#!/usr/bin/env python3
"""
Simulate an infrastructure node publishing environmental data to MQTT.
Use this to test the ingestion pipeline and dashboard without real hardware.
"""
import json
import random
import time
import argparse
from datetime import datetime, timezone
import paho.mqtt.client as mqtt
NODE_IDS = ["!a1b2c3d4", "!b2c3d4e5", "!c3d4e5f6"]
NODE_COORDS = {
"!a1b2c3d4": (49.82, 18.26), # Ostrava-ish
"!b2c3d4e5": (49.75, 18.20), # Nearby
"!c3d4e5f6": (49.78, 18.35), # Nearby
}
def make_payload(node_id: str):
now = datetime.now(timezone.utc).isoformat()
lat, lon = NODE_COORDS.get(node_id, (50.0, 14.0))
return {
"type": "enviro_reading",
"node_id": node_id,
"received_at": now,
"hop_count": random.randint(1, 3),
"lat": lat,
"lon": lon,
"payload": {
"time": now,
"node_id": node_id,
"temperature_c": round(random.uniform(15.0, 25.0), 2),
"humidity_percent": round(random.uniform(40.0, 80.0), 2),
"pressure_pa": round(random.uniform(100800.0, 102000.0), 2),
"wind_speed_ms": round(random.uniform(0.0, 12.0), 1),
"wind_direction": random.randint(0, 359),
"pm25_ugm3": round(random.uniform(5.0, 35.0), 1),
"pm10_ugm3": round(random.uniform(10.0, 50.0), 1),
"gas_resistance_kohm": round(random.uniform(50.0, 200.0), 1),
"battery_voltage": round(random.uniform(3.2, 4.2), 2),
"solar_voltage": round(random.uniform(4.5, 6.0), 2),
},
}
def main():
parser = argparse.ArgumentParser(description="Simulate KosmoConnect bridge node")
parser.add_argument("--host", default="localhost", help="MQTT broker host")
parser.add_argument("--port", type=int, default=1883, help="MQTT broker port")
parser.add_argument("--topic", default="kosmo/ingest/enviro", help="MQTT topic")
parser.add_argument("--interval", type=int, default=10, help="Seconds between messages")
parser.add_argument("--count", type=int, default=0, help="Number of messages to send (0=forever)")
args = parser.parse_args()
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect(args.host, args.port, 60)
client.loop_start()
print(f"Connected to {args.host}:{args.port}. Publishing to {args.topic} every {args.interval}s")
sent = 0
try:
while args.count == 0 or sent < args.count:
node_id = random.choice(NODE_IDS)
payload = make_payload(node_id)
client.publish(args.topic, json.dumps(payload))
print(f"[{sent+1}] Published for {node_id}")
sent += 1
time.sleep(args.interval)
except KeyboardInterrupt:
print("\nStopped by user.")
finally:
client.loop_stop()
client.disconnect()
if __name__ == "__main__":
main()

tests/README.md Normal file

@@ -0,0 +1,49 @@
# Tests
This directory contains integration, end-to-end, and hardware-in-the-loop tests for the KosmoConnect platform.
## Structure
```
tests/
├── integration/ # Service-level integration tests
│ ├── api/ # API contract tests
│ ├── gateway/ # Message gateway flow tests
│ ├── ingestion/ # Data pipeline tests
│ └── e2e/ # Full end-to-end scenarios
└── hardware-in-loop/ # Physical hardware validation
├── enviro-node/ # Enviro-node firmware tests
├── bridge/ # Bridge daemon tests
└── fixtures/ # Shared test data and mock devices
```
## Integration Tests
Run against a local Docker Compose stack of all backend services.
**Scenarios**:
1. **Data Ingestion E2E**: Simulate a bridge publishing MQTT messages → verify data appears in API response
2. **Message Gateway E2E**: Simulate web user sending message → verify MQTT topic receives payload → simulate ACK → verify delivery status updated
3. **Subscription Enforcement**: Attempt to send message with expired subscription → verify 403 response
4. **Rate Limiting**: Send burst of messages → verify 429 response
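Scenario 4 assumes some limiter in the gateway; a token-bucket sketch models the expected accept/429 behavior (the actual gateway algorithm, rate, and burst size are assumptions, not the real implementation):

```python
import time

class TokenBucket:
    """Simple token bucket: `rate` tokens/s refill, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # request accepted
        return False      # caller should return HTTP 429

# A burst larger than the bucket capacity gets partially rejected:
bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(8)]
print(results)  # [True]*5 then [False]*3 for an instant burst
```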
## Hardware-in-the-Loop Tests
Requires physical test hardware on a lab bench.
**Setup**:
- 1x Enviro-Node prototype on test fixture
- 1x Infrastructure Node (T-Beam) connected to test Raspberry Pi
- Local test MQTT broker
- RF isolation box (optional, for CI)
**Scenarios**:
1. **Sensor Accuracy**: Compare enviro-node readings against calibrated reference instruments
2. **Store-and-Forward**: Disconnect bridge node, let enviro-node collect data, reconnect bridge, verify all data is transmitted
3. **Power Budget**: Run enviro-node on small battery for 72 hours, verify expected sleep currents
4. **Mesh Relay**: Send message between two handheld Meshtastic devices via the enviro-node relay
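Scenario 2's store-and-forward behavior can be modeled as a bounded queue that drains oldest-first on reconnect (a behavioral sketch of what the test verifies, not the actual firmware implementation):

```python
from collections import deque

class StoreAndForward:
    """Buffer readings while the uplink is down; drain oldest-first on reconnect."""

    def __init__(self, max_buffer: int = 100):
        self.buffer = deque(maxlen=max_buffer)  # oldest entries drop when full
        self.delivered = []
        self.online = False

    def record(self, reading: dict):
        if self.online:
            self.delivered.append(reading)
        else:
            self.buffer.append(reading)

    def reconnect(self):
        self.online = True
        while self.buffer:                      # flush backlog oldest-first
            self.delivered.append(self.buffer.popleft())

node = StoreAndForward()
for i in range(3):
    node.record({"seq": i})       # bridge offline: buffered
node.reconnect()                  # bridge back: backlog flushed
node.record({"seq": 3})           # live reading delivered directly
print([r["seq"] for r in node.delivered])  # [0, 1, 2, 3]
```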
## CI Integration
- Integration tests run on every PR via GitHub Actions
- Hardware-in-the-loop tests run nightly on a self-hosted runner connected to the lab bench

web/README.md Normal file

@@ -0,0 +1,74 @@
# Web
This directory contains all web-based frontends for KosmoConnect.
## Structure
```
web/
├── dashboard/ # Public weather dashboard
│ ├── src/
│ ├── public/
│ ├── package.json
│ └── Dockerfile
├── messaging/ # Subscriber web-to-mesh messaging client
│ ├── src/
│ ├── public/
│ ├── package.json
│ └── Dockerfile
├── admin/ # Administrative panel
│ ├── src/
│ ├── public/
│ ├── package.json
│ └── Dockerfile
└── shared/ # Shared UI components, hooks, styles
├── components/
├── hooks/
└── styles/
```
## Dashboard
A public-facing weather visualization app.
**Tech Stack**: React + Vite + MapLibre GL
**Features**:
- Interactive map showing all active enviro-nodes
- Live sensor readings inside node popups
- Auto-refreshing node locations and health
- Mobile-responsive dark theme
- No login required for basic viewing
See [`dashboard/README.md`](./dashboard/README.md) for run instructions.
## Messaging Client
A subscriber-only app for sending and receiving mesh messages.
**Tech Stack**: React + Vite
**Features**:
- Inbox with threaded conversations
- Compose message to any node (network plan) or linked nodes (node plan)
- Auto-refreshing replies and delivery status indicators
- Dev-mode user switcher for testing subscription tiers
See [`messaging/README.md`](./messaging/README.md) for run instructions.
## Admin Panel
An internal tool for network operators.
**Tech Stack**: React + Vite + TanStack Table
**Features**:
- Node onboarding wizard
- Subscriber search and management
- Network-wide message broadcast
- System metrics and logs
- Invoice and payout overview
## Design System
- **Colors**: Dark theme primary (slate/zinc), accent color TBD by Church of Kosmo branding
- **Typography**: Inter or system-ui stack
- **Icons**: Lucide React
- **Component Library**: Headless UI + Tailwind CSS

web/build.sh Executable file

@@ -0,0 +1,23 @@
#!/usr/bin/env bash
set -euo pipefail
# Build all web frontends for production deployment
# Output goes to each app's dist/ directory
cd "$(dirname "$0")"
echo "Building Dashboard..."
cd dashboard
npm install
npm run build
cd ..
echo "Building Messaging..."
cd messaging
npm install
npm run build
cd ..
echo "Build complete."
echo "Dashboard static files: dashboard/dist/"
echo "Messaging static files: messaging/dist/"

web/dashboard/README.md Normal file

@@ -0,0 +1,40 @@
# KosmoConnect Dashboard
Public weather and node health visualization for the KosmoConnect mesh network.
## Tech Stack
- **React 18** + **Vite**
- **MapLibre GL** (open-source Mapbox alternative)
- CartoDB Voyager basemap (free, no API key required)
## Running Locally
Make sure the backend API is running on `http://localhost:8002` (see `backend/README.md`).
```bash
cd web/dashboard
npm install
npm run dev
```
Open http://localhost:3000 in your browser.
## Features (v0.1)
- Interactive map showing all registered enviro-nodes
- Live sensor readings inside node popups (temperature, humidity, pressure, wind, PM2.5, PM10, battery, solar)
- Auto-refresh every 15 seconds
- Automatic map bounds fitting so all nodes are visible
- Dark UI theme aligned with Church of Kosmo aesthetics
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `VITE_API_BASE` | `http://localhost:8002` | Base URL for the KosmoConnect API |
## Architecture Notes
- The Vite dev server proxies `/api` requests to `localhost:8002` to avoid CORS issues during development.
- In production, the dashboard is served as static files and talks directly to the API host.

web/dashboard/index.html Normal file

@@ -0,0 +1,13 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>KosmoConnect Dashboard</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.jsx"></script>
</body>
</html>

2130
web/dashboard/package-lock.json generated Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,22 @@
{
"name": "kosmoconnect-dashboard",
"private": true,
"version": "0.1.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview"
},
"dependencies": {
"maplibre-gl": "^4.7.1",
"react": "^18.3.1",
"react-dom": "^18.3.1"
},
"devDependencies": {
"@types/react": "^18.3.3",
"@types/react-dom": "^18.3.0",
"@vitejs/plugin-react": "^4.3.1",
"vite": "^5.4.1"
}
}

28
web/dashboard/src/App.jsx Normal file
View File

@@ -0,0 +1,28 @@
import MapView from './components/MapView'
function App() {
return (
<div style={{ width: '100%', height: '100vh', display: 'flex', flexDirection: 'column' }}>
<header style={{
padding: '0.75rem 1rem',
background: '#0f172a',
borderBottom: '1px solid #1e293b',
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between'
}}>
<h1 style={{ margin: 0, fontSize: '1.25rem', fontWeight: 600, color: '#38bdf8' }}>
KosmoConnect
</h1>
<span style={{ fontSize: '0.875rem', color: '#94a3b8' }}>
Environmental Intelligence & Emergency Mesh Network
</span>
</header>
<main style={{ flex: 1, position: 'relative', overflow: 'hidden' }}>
<MapView />
</main>
</div>
)
}
export default App

View File

@@ -0,0 +1,120 @@
import { useEffect, useRef } from 'react'
import maplibregl from 'maplibre-gl'
import 'maplibre-gl/dist/maplibre-gl.css'
import { useNodes } from '../hooks/useApi'
import NodePopup from './NodePopup'
import { createRoot } from 'react-dom/client'
// Free CartoDB Voyager style for MapLibre
const MAP_STYLE = 'https://basemaps.cartocdn.com/gl/voyager-gl-style/style.json'
export default function MapView() {
const mapContainerRef = useRef(null)
const mapRef = useRef(null)
const markersRef = useRef([])
const { nodes, error } = useNodes(15000)
useEffect(() => {
if (!mapContainerRef.current || mapRef.current) return
const map = new maplibregl.Map({
container: mapContainerRef.current,
style: MAP_STYLE,
center: [10, 50], // default center (Europe-ish)
zoom: 4,
})
map.addControl(new maplibregl.NavigationControl(), 'top-right')
mapRef.current = map
return () => {
map.remove()
mapRef.current = null
}
}, [])
useEffect(() => {
const map = mapRef.current
if (!map) return
// clear existing markers and unmount their popup React roots
markersRef.current.forEach(({ marker, root }) => {
marker.remove()
// defer unmount so we never unmount a root during a React render pass
setTimeout(() => root.unmount(), 0)
})
markersRef.current = []
if (!nodes.length) return
// only nodes with coordinates get markers and count toward bounds;
// previously, nodes without a fix were pinned at [0, 0]
const withCoords = nodes.filter(n => n.lat != null && n.lon != null)
const coords = withCoords.map(n => [n.lon, n.lat])
withCoords.forEach(node => {
const el = document.createElement('div')
el.style.width = '18px'
el.style.height = '18px'
el.style.borderRadius = '50%'
el.style.background = node.last_seen ? '#22c55e' : '#64748b'
el.style.border = '2px solid #0f172a'
el.style.boxShadow = '0 0 0 2px #38bdf8'
el.style.cursor = 'pointer'
const marker = new maplibregl.Marker({ element: el })
.setLngLat([node.lon, node.lat])
.addTo(map)
const popupContainer = document.createElement('div')
const root = createRoot(popupContainer)
root.render(<NodePopup node={node} />)
const popup = new maplibregl.Popup({
offset: 20,
className: 'kosmo-popup',
maxWidth: '320px'
}).setDOMContent(popupContainer)
marker.setPopup(popup)
markersRef.current.push({ marker, root })
})
if (coords.length) {
const bounds = coords.reduce(
(b, c) => b.extend(c),
new maplibregl.LngLatBounds(coords[0], coords[0])
)
map.fitBounds(bounds, { padding: 60, maxZoom: 12, duration: 1000 })
}
}, [nodes])
return (
<div style={{ width: '100%', height: '100%', position: 'relative' }}>
<div ref={mapContainerRef} style={{ width: '100%', height: '100%' }} />
{error && (
<div style={{
position: 'absolute',
top: '1rem',
left: '1rem',
background: '#7f1d1d',
color: '#fecaca',
padding: '0.5rem 0.75rem',
borderRadius: '0.375rem',
fontSize: '0.875rem',
zIndex: 10
}}>
API Error: {error}
</div>
)}
<div style={{
position: 'absolute',
bottom: '1rem',
left: '1rem',
background: 'rgba(15, 23, 42, 0.8)',
color: '#94a3b8',
padding: '0.35rem 0.6rem',
borderRadius: '0.375rem',
fontSize: '0.75rem',
zIndex: 10,
backdropFilter: 'blur(4px)'
}}>
Nodes online: {nodes.filter(n => n.is_active).length} · Total: {nodes.length}
</div>
</div>
)
}

View File

@@ -0,0 +1,47 @@
import { useLatest } from '../hooks/useApi'
function fmt(v, unit) {
if (v == null) return '—'
return `${Number(v).toFixed(1)} ${unit}`
}
function timeAgo(iso) {
if (!iso) return '—'
const diff = Date.now() - new Date(iso).getTime()
const sec = Math.floor(diff / 1000)
if (sec < 60) return `${sec}s ago`
const min = Math.floor(sec / 60)
if (min < 60) return `${min}m ago`
const hr = Math.floor(min / 60)
return `${hr}h ago`
}
export default function NodePopup({ node }) {
const { reading } = useLatest(node.mesh_node_id)
return (
<div style={{ minWidth: '220px' }}>
<div style={{ fontWeight: 600, fontSize: '1rem', marginBottom: '0.25rem', color: '#38bdf8' }}>
{node.name || node.mesh_node_id}
</div>
<div style={{ fontSize: '0.75rem', color: '#94a3b8', marginBottom: '0.5rem' }}>
{node.mesh_node_id} · {timeAgo(node.last_seen)}
</div>
{reading ? (
<div style={{ display: 'grid', gridTemplateColumns: '1fr 1fr', gap: '0.35rem 0.75rem', fontSize: '0.875rem' }}>
<div><span style={{ color: '#94a3b8' }}>Temp</span><br/><strong>{fmt(reading.temperature_c, '°C')}</strong></div>
<div><span style={{ color: '#94a3b8' }}>Humidity</span><br/><strong>{fmt(reading.humidity_percent, '%')}</strong></div>
<div><span style={{ color: '#94a3b8' }}>Pressure</span><br/><strong>{fmt(reading.pressure_pa != null ? reading.pressure_pa / 100 : null, 'hPa')}</strong></div>
<div><span style={{ color: '#94a3b8' }}>Wind</span><br/><strong>{fmt(reading.wind_speed_ms, 'm/s')} {reading.wind_direction != null ? `@ ${reading.wind_direction}°` : ''}</strong></div>
<div><span style={{ color: '#94a3b8' }}>PM2.5</span><br/><strong>{fmt(reading.pm25_ugm3, 'µg/m³')}</strong></div>
<div><span style={{ color: '#94a3b8' }}>PM10</span><br/><strong>{fmt(reading.pm10_ugm3, 'µg/m³')}</strong></div>
<div><span style={{ color: '#94a3b8' }}>Battery</span><br/><strong>{fmt(reading.battery_voltage, 'V')}</strong></div>
<div><span style={{ color: '#94a3b8' }}>Solar</span><br/><strong>{fmt(reading.solar_voltage, 'V')}</strong></div>
</div>
) : (
<div style={{ fontSize: '0.875rem', color: '#94a3b8' }}>No recent readings available.</div>
)}
</div>
)
}

View File

@@ -0,0 +1,62 @@
import { useEffect, useState } from 'react'
const API_BASE = import.meta.env.VITE_API_BASE || 'http://localhost:8002'
export function useNodes(refreshMs = 15000) {
const [nodes, setNodes] = useState([])
const [error, setError] = useState(null)
useEffect(() => {
let cancelled = false
async function fetchNodes() {
try {
const res = await fetch(`${API_BASE}/api/v1/nodes`)
if (!res.ok) throw new Error(`HTTP ${res.status}`)
const json = await res.json()
if (!cancelled) {
setNodes(json.data || [])
setError(null) // clear any stale error once a fetch succeeds
}
} catch (e) {
if (!cancelled) setError(e.message)
}
}
fetchNodes()
const id = setInterval(fetchNodes, refreshMs)
return () => {
cancelled = true
clearInterval(id)
}
}, [refreshMs])
return { nodes, error }
}
export function useLatest(nodeId) {
const [reading, setReading] = useState(null)
const [error, setError] = useState(null)
useEffect(() => {
if (!nodeId) return
let cancelled = false
async function fetchLatest() {
try {
const res = await fetch(`${API_BASE}/api/v1/weather/latest?node_id=${encodeURIComponent(nodeId)}`)
if (!res.ok) throw new Error(`HTTP ${res.status}`)
const json = await res.json()
if (!cancelled) {
setReading((json.data || [])[0] || null)
setError(null) // clear any stale error once a fetch succeeds
}
} catch (e) {
if (!cancelled) setError(e.message)
}
}
fetchLatest()
const id = setInterval(fetchLatest, 10000)
return () => {
cancelled = true
clearInterval(id)
}
}, [nodeId])
return { reading, error }
}

View File

@@ -0,0 +1,53 @@
:root {
font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
line-height: 1.5;
font-weight: 400;
color-scheme: light dark;
color: rgba(255, 255, 255, 0.87);
background-color: #0f172a;
font-synthesis: none;
text-rendering: optimizeLegibility;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
* {
box-sizing: border-box;
}
body {
margin: 0;
display: flex;
place-items: center;
min-width: 320px;
min-height: 100vh;
}
#root {
width: 100%;
height: 100vh;
}
.kosmo-popup .maplibregl-popup-content {
background: #1e293b;
color: #f8fafc;
border-radius: 0.5rem;
padding: 0.75rem 1rem;
border: 1px solid #334155;
box-shadow: 0 10px 15px -3px rgb(0 0 0 / 0.3);
}
.kosmo-popup .maplibregl-popup-tip {
border-top-color: #1e293b;
}
.maplibregl-popup-close-button {
color: #94a3b8;
font-size: 1rem;
padding: 0.25rem 0.5rem;
}
.maplibregl-popup-close-button:hover {
color: #f8fafc;
background: transparent;
}

View File

@@ -0,0 +1,10 @@
import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import App from './App.jsx'
import './index.css'
createRoot(document.getElementById('root')).render(
<StrictMode>
<App />
</StrictMode>,
)

View File

@@ -0,0 +1,16 @@
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
// https://vitejs.dev/config/
export default defineConfig({
plugins: [react()],
server: {
port: 3000,
proxy: {
'/api': {
target: 'http://localhost:8002',
changeOrigin: true,
}
}
}
})

35
web/messaging/README.md Normal file
View File

@@ -0,0 +1,35 @@
# KosmoConnect Messaging Client
A subscriber-only web application for sending and receiving messages over the Meshtastic mesh network.
## Tech Stack
- **React 18** + **Vite**
- Plain CSS (no heavy UI framework)
## Running Locally
Make sure the **Gateway Service** is running on `http://localhost:8003` (see `backend/gateway/README.md`).
```bash
cd web/messaging
npm install
npm run dev
```
Open http://localhost:3001 in your browser.
## Features (v0.1)
- **User switcher** (dev mode): select between test subscription tiers (Wanderer / Guardian)
- **Conversation list**: auto-refreshing sidebar with latest message preview and unread badges
- **Message thread**: chat-style bubbles with timestamps and delivery status indicators
- `⏳` pending / `✓` queued / `✓✓` transmitted or delivered
- **Auto-refresh**: polls for new replies every 5 seconds
- **Subscription enforcement**: errors surfaced as browser alerts (e.g., quota exceeded, node not allowed)
## Architecture Notes
- The Vite dev server proxies `/api` requests to `localhost:8003` to avoid CORS issues during development.
- In production, the messaging client is served as static files and talks directly to the API gateway host.
- Authentication is currently mocked with a simple `X-User-ID` header selector. Production will use JWT.
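
The mocked-auth note above can be made concrete. A hedged sketch of how the header selection in `apiFetch` might later switch to a JWT bearer token (`buildAuthHeaders` and the shape of its `auth` argument are assumptions, not current code):

```javascript
// Hypothetical sketch: one place to swap the dev X-User-ID header for a
// production JWT. Nothing here exists in the codebase yet; the auth
// object shape ({ token } or { userId }) is an assumption.
function buildAuthHeaders(auth) {
  if (auth.token) {
    // production path: JWT issued by the backend after login
    return { 'Content-Type': 'application/json', Authorization: `Bearer ${auth.token}` }
  }
  // dev path: mocked identity, matching the current UserSwitcher
  return { 'Content-Type': 'application/json', 'X-User-ID': auth.userId }
}
```

Because every request already funnels through `apiFetch`, the migration would touch one function rather than every hook.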

12
web/messaging/index.html Normal file
View File

@@ -0,0 +1,12 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>KosmoConnect Messaging</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.jsx"></script>
</body>
</html>

1753
web/messaging/package-lock.json generated Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,21 @@
{
"name": "kosmoconnect-messaging",
"private": true,
"version": "0.1.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview"
},
"dependencies": {
"react": "^18.3.1",
"react-dom": "^18.3.1"
},
"devDependencies": {
"@types/react": "^18.3.3",
"@types/react-dom": "^18.3.0",
"@vitejs/plugin-react": "^4.3.1",
"vite": "^5.4.1"
}
}

92
web/messaging/src/App.jsx Normal file
View File

@@ -0,0 +1,92 @@
import { useState } from 'react'
import ConversationList from './components/ConversationList'
import MessageThread from './components/MessageThread'
import UserSwitcher from './components/UserSwitcher'
const styles = {
container: {
display: 'flex',
flexDirection: 'column',
width: '100%',
height: '100vh',
background: '#0f172a',
},
header: {
padding: '0.75rem 1rem',
background: '#0f172a',
borderBottom: '1px solid #1e293b',
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
gap: '1rem',
},
title: {
margin: 0,
fontSize: '1.25rem',
fontWeight: 600,
color: '#38bdf8',
},
main: {
display: 'flex',
flex: 1,
overflow: 'hidden',
},
sidebar: {
width: '320px',
minWidth: '260px',
borderRight: '1px solid #1e293b',
display: 'flex',
flexDirection: 'column',
overflow: 'hidden',
},
thread: {
flex: 1,
display: 'flex',
flexDirection: 'column',
overflow: 'hidden',
},
empty: {
flex: 1,
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
color: '#64748b',
fontSize: '0.95rem',
},
}
export default function App() {
const [userId, setUserId] = useState('11111111-1111-1111-1111-111111111111')
const [selectedNodeId, setSelectedNodeId] = useState(null)
return (
<div style={styles.container}>
<header style={styles.header}>
<h1 style={styles.title}>KosmoConnect Messaging</h1>
<UserSwitcher userId={userId} onChange={setUserId} />
</header>
<main style={styles.main}>
<aside style={styles.sidebar}>
<ConversationList
userId={userId}
selectedNodeId={selectedNodeId}
onSelect={setSelectedNodeId}
/>
</aside>
<section style={styles.thread}>
{selectedNodeId ? (
<MessageThread
userId={userId}
nodeId={selectedNodeId}
key={selectedNodeId + userId}
/>
) : (
<div style={styles.empty}>Select a conversation to start messaging</div>
)}
</section>
</main>
</div>
)
}

View File

@@ -0,0 +1,96 @@
import { useConversations } from '../hooks/useApi'
function timeAgo(iso) {
if (!iso) return ''
const diff = Date.now() - new Date(iso).getTime()
const sec = Math.floor(diff / 1000)
if (sec < 60) return `${sec}s`
const min = Math.floor(sec / 60)
if (min < 60) return `${min}m`
const hr = Math.floor(min / 60)
return `${hr}h`
}
export default function ConversationList({ userId, selectedNodeId, onSelect }) {
const { conversations, error } = useConversations(userId)
return (
<div style={{ display: 'flex', flexDirection: 'column', height: '100%' }}>
<div
style={{
padding: '0.6rem 0.9rem',
fontSize: '0.75rem',
fontWeight: 600,
color: '#64748b',
textTransform: 'uppercase',
letterSpacing: '0.05em',
borderBottom: '1px solid #1e293b',
}}
>
Conversations
</div>
{error && (
<div style={{ padding: '0.5rem 0.75rem', color: '#fecaca', fontSize: '0.85rem' }}>
Error: {error}
</div>
)}
<div style={{ flex: 1, overflowY: 'auto' }}>
{conversations.length === 0 && (
<div style={{ padding: '1rem', color: '#64748b', fontSize: '0.875rem' }}>
No conversations yet.
</div>
)}
{conversations.map((c) => {
const isSelected = c.node_id === selectedNodeId
return (
<button
key={c.node_id}
onClick={() => onSelect(c.node_id)}
style={{
width: '100%',
textAlign: 'left',
padding: '0.75rem 0.9rem',
background: isSelected ? '#1e293b' : 'transparent',
border: 'none',
borderBottom: '1px solid #1e293b',
cursor: 'pointer',
display: 'flex',
flexDirection: 'column',
gap: '0.25rem',
}}
>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}>
<span style={{ fontWeight: 600, color: '#f8fafc', fontSize: '0.95rem' }}>
{c.nickname || c.node_id}
</span>
<span style={{ fontSize: '0.75rem', color: '#64748b' }}>{timeAgo(c.latest_at)}</span>
</div>
<div style={{ fontSize: '0.85rem', color: '#94a3b8', whiteSpace: 'nowrap', overflow: 'hidden', textOverflow: 'ellipsis' }}>
{c.latest_text || 'No messages'}
</div>
{c.unread_count > 0 && (
<span
style={{
alignSelf: 'flex-start',
marginTop: '0.2rem',
background: '#38bdf8',
color: '#0f172a',
fontSize: '0.7rem',
fontWeight: 700,
padding: '0.15rem 0.4rem',
borderRadius: '999px',
}}
>
{c.unread_count} new
</span>
)}
</button>
)
})}
</div>
</div>
)
}

View File

@@ -0,0 +1,133 @@
import { useState, useRef, useEffect } from 'react'
import { useMessages } from '../hooks/useApi'
function statusIcon(status) {
if (status === 'pending') return '⏳'
if (status === 'queued') return '✓'
if (status === 'transmitted') return '✓✓'
if (status === 'delivered') return '✓✓'
return ''
}
export default function MessageThread({ userId, nodeId }) {
const { messages, error, sendMessage } = useMessages(userId, nodeId)
const [text, setText] = useState('')
const [sending, setSending] = useState(false)
const bottomRef = useRef(null)
useEffect(() => {
bottomRef.current?.scrollIntoView({ behavior: 'smooth' })
}, [messages])
const handleSend = async (e) => {
e.preventDefault()
if (!text.trim() || sending) return
setSending(true)
try {
await sendMessage(text.trim())
setText('')
} catch (err) {
alert(err.message)
} finally {
setSending(false)
}
}
return (
<div style={{ display: 'flex', flexDirection: 'column', height: '100%' }}>
<div
style={{
padding: '0.75rem 1rem',
borderBottom: '1px solid #1e293b',
fontWeight: 600,
color: '#f8fafc',
background: '#0f172a',
}}
>
{nodeId}
</div>
<div style={{ flex: 1, overflowY: 'auto', padding: '1rem', display: 'flex', flexDirection: 'column', gap: '0.6rem' }}>
{error && (
<div style={{ color: '#fecaca', fontSize: '0.85rem' }}>Error loading messages: {error}</div>
)}
{messages.map((msg) => {
const isOutbound = msg.direction === 'outbound'
return (
<div
key={msg.id}
style={{
alignSelf: isOutbound ? 'flex-end' : 'flex-start',
maxWidth: '70%',
background: isOutbound ? '#075985' : '#1e293b',
color: '#f8fafc',
padding: '0.6rem 0.9rem',
borderRadius: '0.75rem',
borderBottomRightRadius: isOutbound ? '0.25rem' : '0.75rem',
borderBottomLeftRadius: isOutbound ? '0.75rem' : '0.25rem',
fontSize: '0.9rem',
lineHeight: 1.4,
whiteSpace: 'pre-wrap',
wordBreak: 'break-word',
}}
>
{msg.text}
<div style={{ marginTop: '0.35rem', fontSize: '0.7rem', color: '#94a3b8', display: 'flex', alignItems: 'center', gap: '0.35rem', justifyContent: isOutbound ? 'flex-end' : 'flex-start' }}>
<span>{new Date(msg.created_at).toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' })}</span>
{isOutbound && <span title={msg.status}>{statusIcon(msg.status)}</span>}
{msg.hop_count != null && (
<span title={`${msg.hop_count} hops`}>· {msg.hop_count} hops</span>
)}
</div>
</div>
)
})}
<div ref={bottomRef} />
</div>
<form
onSubmit={handleSend}
style={{
display: 'flex',
gap: '0.5rem',
padding: '0.75rem 1rem',
borderTop: '1px solid #1e293b',
background: '#0f172a',
}}
>
<input
type="text"
value={text}
onChange={(e) => setText(e.target.value)}
placeholder="Type a message..."
maxLength={200}
style={{
flex: 1,
background: '#1e293b',
border: '1px solid #334155',
borderRadius: '0.5rem',
padding: '0.5rem 0.75rem',
color: '#f8fafc',
fontSize: '0.9rem',
}}
/>
<button
type="submit"
disabled={sending || !text.trim()}
style={{
background: sending ? '#334155' : '#0284c7',
color: '#fff',
border: 'none',
borderRadius: '0.5rem',
padding: '0.5rem 1rem',
fontWeight: 600,
fontSize: '0.9rem',
}}
>
{sending ? 'Sending…' : 'Send'}
</button>
</form>
</div>
)
}

View File

@@ -0,0 +1,30 @@
const USERS = [
{ id: '11111111-1111-1111-1111-111111111111', label: 'Wanderer (any node)' },
{ id: '22222222-2222-2222-2222-222222222222', label: 'Guardian (allowed nodes)' },
]
export default function UserSwitcher({ userId, onChange }) {
return (
<div style={{ display: 'flex', alignItems: 'center', gap: '0.5rem' }}>
<span style={{ fontSize: '0.8rem', color: '#94a3b8' }}>User:</span>
<select
value={userId}
onChange={(e) => onChange(e.target.value)}
style={{
background: '#1e293b',
color: '#f8fafc',
border: '1px solid #334155',
borderRadius: '0.375rem',
padding: '0.35rem 0.5rem',
fontSize: '0.85rem',
}}
>
{USERS.map((u) => (
<option key={u.id} value={u.id}>
{u.label}
</option>
))}
</select>
</div>
)
}

View File

@@ -0,0 +1,77 @@
import { useEffect, useState, useCallback } from 'react'
const API_BASE = import.meta.env.VITE_API_BASE || 'http://localhost:8003'
async function apiFetch(path, userId, options = {}) {
const res = await fetch(`${API_BASE}${path}`, {
...options,
headers: {
'Content-Type': 'application/json',
'X-User-ID': userId,
...options.headers,
},
})
if (!res.ok) {
const err = await res.json().catch(() => ({}))
throw new Error(err.detail || `HTTP ${res.status}`)
}
return res.json()
}
export function useConversations(userId, refreshMs = 5000) {
const [conversations, setConversations] = useState([])
const [error, setError] = useState(null)
useEffect(() => {
let cancelled = false
async function fetchConversations() {
try {
const json = await apiFetch('/api/v1/messages/conversations', userId)
if (!cancelled) {
setConversations(json.data || [])
setError(null) // clear any stale error once a fetch succeeds
}
} catch (e) {
if (!cancelled) setError(e.message)
}
}
fetchConversations()
const id = setInterval(fetchConversations, refreshMs)
return () => {
cancelled = true
clearInterval(id)
}
}, [userId, refreshMs])
return { conversations, error }
}
export function useMessages(userId, nodeId, refreshMs = 5000) {
const [messages, setMessages] = useState([])
const [error, setError] = useState(null)
useEffect(() => {
let cancelled = false
async function fetchMessages() {
try {
const json = await apiFetch(`/api/v1/messages/conversations/${encodeURIComponent(nodeId)}`, userId)
if (!cancelled) {
setMessages(json.data || [])
setError(null) // clear any stale error once a fetch succeeds
}
} catch (e) {
if (!cancelled) setError(e.message)
}
}
fetchMessages()
const id = setInterval(fetchMessages, refreshMs)
return () => {
cancelled = true
clearInterval(id)
}
}, [userId, nodeId, refreshMs])
const sendMessage = useCallback(async (text) => {
const json = await apiFetch('/api/v1/messages', userId, {
method: 'POST',
body: JSON.stringify({ target_node_id: nodeId, text }),
})
return json
}, [userId, nodeId])
return { messages, error, sendMessage }
}

View File

@@ -0,0 +1,31 @@
:root {
font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
line-height: 1.5;
color-scheme: light dark;
color: rgba(255, 255, 255, 0.9);
background-color: #0f172a;
font-synthesis: none;
text-rendering: optimizeLegibility;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
* {
box-sizing: border-box;
}
body {
margin: 0;
display: flex;
min-width: 320px;
min-height: 100vh;
}
#root {
width: 100%;
height: 100vh;
}
button {
cursor: pointer;
}

View File

@@ -0,0 +1,10 @@
import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import App from './App.jsx'
import './index.css'
createRoot(document.getElementById('root')).render(
<StrictMode>
<App />
</StrictMode>,
)

View File

@@ -0,0 +1,15 @@
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
export default defineConfig({
plugins: [react()],
server: {
port: 3001,
proxy: {
'/api': {
target: 'http://localhost:8003',
changeOrigin: true,
}
}
}
})