CodeNexa/Lilo

Lilo — CreatorCommand (Production-ready packaging)

Lilo — Full Prototype (OpenAI + React + Celery Scheduler)

This prototype extends the earlier Lilo project with:

  • React frontend built with Vite.
  • FastAPI backend with OpenAI integration (uses OPENAI_API_KEY env var).
  • Celery scheduler using Redis broker/backend, designed to support scheduling up to 1000 tasks.
  • Dockerfiles and docker-compose.yml to run backend, frontend, redis, and worker(s).
  • SQLite for lightweight metadata storage (schedules table).
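The schedules table schema is not spelled out above; a minimal sketch of what it might look like, using the stdlib sqlite3 module (all column names here are assumptions, not the actual schema):

```python
import sqlite3

# Hypothetical schema for the schedules table described above.
SCHEMA = """
CREATE TABLE IF NOT EXISTS schedules (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    platform TEXT NOT NULL,          -- e.g. 'tiktok', 'youtube'
    content TEXT NOT NULL,           -- post body / caption
    run_at TEXT NOT NULL,            -- ISO-8601 UTC timestamp
    celery_task_id TEXT,             -- lets a pending task be revoked
    status TEXT DEFAULT 'pending'
);
"""

def init_db(path: str = "lilo.db") -> sqlite3.Connection:
    """Open (or create) the SQLite database and ensure the table exists."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```

Storing the Celery task id alongside each row is one way to support cancelling a schedule later.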

IMPORTANT: This is a prototype. Replace placeholder logic and secure secrets before production.

Quick start (requires Docker & Docker Compose)

  1. Copy your OpenAI API key into a .env file in the project root:
    OPENAI_API_KEY=sk-...
    BACKEND_PORT=8000
    FRONTEND_PORT=5173
  2. Build and run everything:
    docker-compose up --build
  3. Open the services in your browser:
    React frontend: http://localhost:5173
    API docs (backend): http://localhost:8000/docs
    Flower (Celery monitor): http://localhost:5555

Architecture notes

  • Scheduler: Celery with Redis broker and result backend. When scheduling a post, the backend enqueues a Celery task with an ETA. Celery workers will execute tasks (simulate publish) at the scheduled time.
  • 1000-task capacity: To support up to 1000 scheduled jobs, configure Redis persistence and run multiple worker replicas. The compose file defines a worker service that can be scaled (e.g., docker-compose up --scale worker=3). For very large workloads, move to a production-grade broker (Redis cluster or RabbitMQ).
  • OpenAI integration: The endpoint /generate/hashtags uses OpenAI's API to create hashtag suggestions. It gracefully falls back to simple heuristics if no key is provided.
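The ETA mechanism above can be sketched as follows; only the timestamp handling is stdlib, and publish_post is a hypothetical task name standing in for whatever the backend's Celery task is called:

```python
from datetime import datetime, timezone

def compute_eta(iso_ts: str) -> datetime:
    """Turn a client-supplied ISO-8601 timestamp into the aware UTC
    datetime that Celery accepts as an ETA."""
    eta = datetime.fromisoformat(iso_ts)
    if eta.tzinfo is None:          # treat naive timestamps as UTC
        eta = eta.replace(tzinfo=timezone.utc)
    return eta.astimezone(timezone.utc)

# Hypothetical enqueue (publish_post is an assumed Celery task name):
# result = publish_post.apply_async(args=[schedule_id], eta=compute_eta(ts))
# Persisting result.id with the schedule lets the task be revoked later.
```

Normalising to aware UTC datetimes up front avoids workers interpreting naive timestamps in their local timezone.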

Files included

  • backend/: FastAPI app, Celery tasks, Dockerfile
  • frontend/: Vite + React app (simple UI)
  • docker-compose.yml, .env.example, README.md

Next steps for production

  • Use managed Redis or RabbitMQ for high reliability.
  • Add OAuth for platform posting and secure token storage (Vault).
  • Add rate limiting and robust retry/error handling for publishing tasks.
  • Use PostgreSQL for transactional storage and Celery results if required.

OAuth stubs for TikTok & YouTube

The backend contains example OAuth endpoints:

  • /auth/tiktok -> Redirects to TikTok's authorization page (placeholder).
  • /auth/tiktok/callback -> Receives the code and would exchange it for tokens.
  • /auth/youtube -> Redirects to Google's OAuth consent screen for YouTube scopes.
  • /auth/youtube/callback -> Receives the code for token exchange.
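As a sketch of what the /auth/tiktok redirect might construct, using only the stdlib (the endpoint URL and scope below are assumptions to verify against TikTok's developer docs):

```python
from urllib.parse import urlencode

# Assumed TikTok v2 authorization endpoint; confirm against current docs.
TIKTOK_AUTH_URL = "https://www.tiktok.com/v2/auth/authorize/"

def tiktok_authorize_url(client_key: str, redirect_uri: str, state: str) -> str:
    """Build the URL the /auth/tiktok endpoint would redirect to."""
    params = {
        "client_key": client_key,
        "response_type": "code",
        "scope": "user.info.basic",   # assumed minimal scope
        "redirect_uri": redirect_uri,
        "state": state,               # CSRF protection token
    }
    return f"{TIKTOK_AUTH_URL}?{urlencode(params)}"
```

The state parameter should be a random value stored server-side and checked in the callback.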

To enable real OAuth:

  1. Register your app on TikTok for Developers and Google Cloud Console (YouTube Data API).
  2. Set the client IDs, secrets, and redirect URIs as environment variables (e.g., TIKTOK_CLIENT_ID, TIKTOK_CLIENT_SECRET, TIKTOK_REDIRECT_URI, YOUTUBE_CLIENT_ID, YOUTUBE_CLIENT_SECRET, YOUTUBE_REDIRECT_URI).
  3. Implement secure token exchange and storage (do not store secrets in plaintext).
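Step 3's token exchange can be sketched without committing to an HTTP client; this builds the YouTube/Google token request from the environment variables named in step 2 (the endpoint is Google's documented OAuth 2.0 token URL):

```python
import os
from urllib.parse import urlencode

def build_token_request(code: str) -> tuple[str, bytes]:
    """Build the Google OAuth token-exchange request (endpoint + form body).
    Client credentials come from the environment, never hard-coded."""
    body = urlencode({
        "code": code,
        "client_id": os.environ.get("YOUTUBE_CLIENT_ID", ""),
        "client_secret": os.environ.get("YOUTUBE_CLIENT_SECRET", ""),
        "redirect_uri": os.environ.get("YOUTUBE_REDIRECT_URI", ""),
        "grant_type": "authorization_code",
    }).encode()
    return "https://oauth2.googleapis.com/token", body
```

The returned URL and body would be POSTed with Content-Type application/x-www-form-urlencoded; the resulting tokens should go into encrypted storage, not the SQLite metadata table.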

Scaling workers (recommended)

To scale workers for large scheduled workloads (e.g., 1000+ pending jobs), use:

docker-compose -f docker-compose.yml -f docker-compose.override.yml up --build --scale worker=3

Adjust --scale worker=N according to CPU and memory. Monitor using Flower at http://localhost:5555.

Production deployment with Nginx (HTTP-only, TLS-ready)

This setup uses an Nginx container to:

  • Serve the static React build.
  • Reverse-proxy /api/ to the backend (FastAPI).
  • Reverse-proxy /flower/ to the Celery Flower dashboard.
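A minimal nginx server block covering those three roles might look like this (the backend and flower upstream names are assumptions about the compose service names):

```nginx
server {
    listen 80;

    # Static React build copied in by scripts/build_prod.sh
    root /usr/share/nginx/html;
    index index.html;

    location /api/ {
        proxy_pass http://backend:8000/;   # assumed compose service name
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /flower/ {
        proxy_pass http://flower:5555/;    # assumed compose service name
    }

    location / {
        try_files $uri /index.html;        # SPA history-mode fallback
    }
}
```

The trailing slash on proxy_pass strips the /api/ prefix before forwarding, so FastAPI routes stay unprefixed.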

Local production build & run (recommended free-tier friendly)

  1. Ensure Docker & Docker Compose are installed.
  2. Build frontend and copy to Nginx:
    ./scripts/build_prod.sh
    The script builds the React app and copies frontend/dist to nginx/dist, then runs docker-compose up --build.
  3. Open http://localhost in your browser. API is available at http://localhost/api/.

TLS

  • This configuration listens on port 80 (HTTP) but is TLS-ready. To enable HTTPS on a public server, either terminate TLS on the host with a reverse proxy (Let's Encrypt / Certbot) or extend the nginx service to include Certbot and mount the certificates. For free deployments (e.g., a small VPS), obtain certificates with Certbot on the host and mount them into the nginx container.

© Dipark Solutions

About

Content scheduler
