Adding GPU support for transcription
 .env.example | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

@@ -35,6 +35,26 @@ OPENAI_API_KEY=
 # OPENAI_TRANSCRIBE_MODEL=whisper-1
 # OPENAI_TRANSCRIBE_TIMEOUT=600
 
+# Local Whisper settings
+# Choose CPU explicitly unless you have a working GPU runtime in Docker
+WHISPER_DEVICE=cpu
+# Model and precision (large-v3 int8 is accurate but heavy; consider medium/small for speed)
+WHISPER_MODEL=large-v3
+WHISPER_PRECISION=int8
+# Threads for CPU inference
+WHISPER_CPU_THREADS=4
+
+# --- GPU (CUDA) optional setup ---
+# To enable NVIDIA GPU acceleration:
+# 1) Install the NVIDIA driver on the host and the NVIDIA Container Toolkit
+# 2) Set the Docker runtime to NVIDIA for the worker containers
+# DOCKER_GPU_RUNTIME=nvidia
+# 3) Ensure GPU visibility (default is all)
+# NVIDIA_VISIBLE_DEVICES=all
+# 4) Use GPU-friendly precision and device
+# WHISPER_DEVICE=cuda
+# WHISPER_PRECISION=float16
+
 # Docker volumes paths
 LIBRARY_HOST_DIR=/mnt/nfs/library
 TRANSCRIPTS_HOST_DIR=/mnt/nfs/transcripts
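The `WHISPER_DEVICE` / `WHISPER_PRECISION` pairing (cpu+int8, cuda+float16) matches the option names used by faster-whisper (CTranslate2). As a sketch of how a worker could map these variables onto model settings — the function name and the assumption that the worker uses faster-whisper are mine, not from this commit:

```python
import os

def whisper_settings(env=os.environ):
    """Read the WHISPER_* variables added in this commit, falling back to
    the defaults from .env.example. The returned keys mirror faster-whisper's
    WhisperModel arguments; treat this as an illustrative sketch, not the
    project's actual worker code."""
    return {
        "model_size_or_path": env.get("WHISPER_MODEL", "large-v3"),
        "device": env.get("WHISPER_DEVICE", "cpu"),
        "compute_type": env.get("WHISPER_PRECISION", "int8"),
        "cpu_threads": int(env.get("WHISPER_CPU_THREADS", "4")),
    }

# With an empty environment the defaults match the committed values:
settings = whisper_settings(env={})
# e.g. WhisperModel(**settings), if the worker uses faster-whisper
```

Switching the .env values to `WHISPER_DEVICE=cuda` / `WHISPER_PRECISION=float16` then flows through without code changes.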
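Steps 2) and 3) in the comments are Docker-level configuration rather than application settings. A minimal docker-compose sketch of how they might be wired up — the `transcriber` service name is hypothetical, and the assumption is that Compose interpolates these variables from the same .env file:

```yaml
services:
  transcriber:                              # hypothetical worker service name
    runtime: ${DOCKER_GPU_RUNTIME:-runc}    # "nvidia" when GPU is enabled
    environment:
      - NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES:-all}
      - WHISPER_DEVICE=${WHISPER_DEVICE:-cpu}
      - WHISPER_PRECISION=${WHISPER_PRECISION:-int8}
```

With the defaults the container runs on the standard `runc` runtime; uncommenting `DOCKER_GPU_RUNTIME=nvidia` in .env switches the worker to the NVIDIA runtime without editing the compose file.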