Introducing OpenAI offloaded transcription
@@ -33,6 +33,9 @@ Note: `.env.example` includes placeholders for both **Meili** and **OpenWebUI**
- `WHISPER_MODEL`: Whisper model variant to use for transcription (e.g., `small`, `medium`, `large`).
- `WHISPER_PRECISION`: Precision setting for Whisper inference (`float32` or `float16`).
- `WHISPER_LANGUAGE`: Language code for Whisper to use during transcription (e.g., `en` for English).
- `TRANSCRIBE_BACKEND` (default `local`): Set to `openai` to offload Whisper transcription to the OpenAI API instead of running locally.
- `OPENAI_API_KEY`: Required when `TRANSCRIBE_BACKEND=openai`; API key used for authenticated requests.
- `OPENAI_BASE_URL`, `OPENAI_TRANSCRIBE_MODEL`, `OPENAI_TRANSCRIBE_TIMEOUT`: Optional overrides for the OpenAI transcription endpoint, model, and request timeout.
- `YTDLP_COOKIES`: Path to a yt-dlp cookies file for accessing age-restricted or private videos.
- `OPENWEBUI_URL`: Base URL of the OpenWebUI API (default depends on platform).
- `OPENWEBUI_API_KEY`: API key for authenticating PodX workers with OpenWebUI.
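The backend-selection contract described above (default `local`, with `OPENAI_API_KEY` required only when offloading to `openai`) can be sketched as a small validation helper. This is an illustrative sketch, not code from the project; the function name `resolve_transcribe_backend` is hypothetical.

```python
import os


def resolve_transcribe_backend(env=None):
    """Pick the transcription backend from environment variables.

    Mirrors the settings documented above: TRANSCRIBE_BACKEND defaults
    to "local"; when set to "openai", OPENAI_API_KEY must also be set.
    (Helper name and error wording are illustrative assumptions.)
    """
    env = os.environ if env is None else env
    backend = env.get("TRANSCRIBE_BACKEND", "local")
    if backend not in ("local", "openai"):
        raise ValueError(f"Unsupported TRANSCRIBE_BACKEND: {backend!r}")
    if backend == "openai" and not env.get("OPENAI_API_KEY"):
        raise RuntimeError("TRANSCRIBE_BACKEND=openai requires OPENAI_API_KEY")
    return backend
```

With no variables set, the helper falls back to the documented default, `local`; setting `TRANSCRIBE_BACKEND=openai` without an API key fails fast instead of erroring mid-transcription.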