# RunPod Scheduler

Last updated: 2026-04-16

Automated scheduler for RunPod GPU instances with Siraaj API integration. Creates and terminates pods on a schedule and updates Siraaj instances with the dynamic RunPod URLs.
## Problem Solved

RunPod assigns new IPs/ports daily (e.g., `213.173.105.7:30205`). This service automatically updates your Siraaj instances via REST API whenever pods get new addresses, eliminating manual configuration updates.
## Key Features

- ✅ Automated Pod Management: schedules daily start/stop to save costs
- ✅ Dynamic IP/Port Handling: automatically extracts new RunPod endpoints via GraphQL
- ✅ Siraaj API Integration: updates LLM providers and AI Meet configs via REST API
- ✅ Safe Pod Termination: only terminates the pods you configured
- ✅ Configuration Persistence: `--stop` preserves Siraaj configs for the next `--start`
## Quick Start

```bash
# Install dependencies
uv sync

# Configure environment
cp .env.example .env
# Edit .env with your RunPod API key and Siraaj instance credentials

# Configure your models
# Edit config/pods_config.json with your pod specifications

# Test manually
python -m src.main --start   # Create pods and update Siraaj
python -m src.main --stop    # Terminate pods (keeps Siraaj configs)
python -m src.main --status  # Check status and current configurations

# Run the scheduled daemon (starts pods at 6 AM, stops at 6 PM daily)
python -m src.main --daemon
```
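The daemon's schedule boils down to one decision: should pods be up right now? A minimal stdlib-only sketch of that logic (the real scheduler may use cron or a scheduling library; the 06:00/18:00 window matches the defaults above):

```python
from datetime import datetime, time

START_AT = time(6, 0)    # pods come up at 06:00
STOP_AT = time(18, 0)    # pods are terminated at 18:00

def desired_state(now: datetime) -> str:
    """Return 'running' if pods should be up at `now`, else 'stopped'."""
    return "running" if START_AT <= now.time() < STOP_AT else "stopped"

# A daemon loop could poll this and reconcile, e.g.:
#   while True:
#       if desired_state(datetime.now()) == "running": ensure_started()
#       else: ensure_stopped()
#       sleep(60)

print(desired_state(datetime(2026, 4, 16, 9, 30)))  # running
print(desired_state(datetime(2026, 4, 16, 22, 0)))  # stopped
```

Reconciling against a desired state (rather than firing one-shot events at 06:00/18:00) means a daemon restarted mid-day still converges to the correct pod state.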
## Architecture

```
RunPod API  ←→  RunPod Manager  ←→   Scheduler   ←→  Config Manager  ←→   Siraaj API
    ↓                ↓                   ↓                ↓                    ↓
Create/Stop       IP/Port           Scheduling        REST API          LLM Providers
   Pods          Extraction        (Cron Jobs)        Updates          AI Meet Configs
```
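On the right side of this pipeline, the Config Manager pushes the freshly extracted endpoint into Siraaj. A hedged sketch of assembling that update; the field names (`base_url`, `model`) and URL scheme are hypothetical placeholders, not the real Siraaj REST schema:

```python
import json

def build_provider_update(pod_endpoint: str, model: str) -> dict:
    """Assemble the JSON body for a Siraaj LLM-provider update.

    NOTE: field names here are assumptions for illustration; consult the
    Siraaj REST API for the actual schema.
    """
    return {"base_url": f"http://{pod_endpoint}/v1", "model": model}

body = build_provider_update("213.173.105.7:30205", "my-model")
print(json.dumps(body))
# The manager would then PUT/POST this body to the Siraaj instance's
# provider-config endpoint using the credentials from .env.
```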
## Configuration Files

- `.env`: API keys, scheduling times, and Siraaj instance credentials
- `config/pods_config.json`: pod specifications with Siraaj integration settings
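To give a feel for the pod specification file, here is a hypothetical shape for `config/pods_config.json`; every field name below is an assumption for illustration, so check the real file in the repo for the actual schema:

```json
{
  "pods": [
    {
      "name": "llm-inference",
      "gpu_type": "NVIDIA RTX A6000",
      "image": "vllm/vllm-openai:latest",
      "exposed_port": 8000,
      "siraaj": {
        "instance": "production",
        "update_llm_provider": true
      }
    }
  ]
}
```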
## Docker Deployment

```bash
# Build and run with docker-compose
docker-compose up -d

# Or build and run manually
docker build -t runpod-scheduler .
docker run -d --env-file .env runpod-scheduler --daemon
```
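For the compose route, a minimal hypothetical `docker-compose.yml` might look like the following (service name and volume layout are assumptions; `command: --daemon` presumes the image's entrypoint is `python -m src.main`):

```yaml
services:
  scheduler:
    build: .
    command: --daemon
    env_file: .env
    volumes:
      - ./config:/app/config   # keep pods_config.json editable on the host
    restart: unless-stopped    # survive host reboots so the schedule keeps running
```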
## Usage Examples

```bash
# Development workflow
python -m src.main --start   # Start pods for development
python -m src.main --status  # Check if pods are ready
# ... do your ML work ...
python -m src.main --stop    # Stop pods to save money

# Production deployment
python -m src.main --daemon  # Run continuously with scheduled start/stop
```
For detailed setup, configuration, and troubleshooting, see `ML_ENGINEER_GUIDE.md`.