Turn text prompts into viral YouTube Shorts — fully automated, runs daily.
ai-shorts-lab is an end-to-end pipeline that takes a JSON storyboard of scene prompts and automatically generates, captions, formats, and uploads YouTube Shorts — no manual video editing required.
Powered by WanGP (Wan2.1 14B), FFmpeg, and the YouTube Data API v3.
All videos on @GoooogleAashaan are generated entirely by this pipeline — no stock footage, no manual editing.
```
Storyboard JSON
      │
      ▼
WanGP (Wan2.1 14B)      ← generates each scene as a video clip
      │
      ▼
FFmpeg concat           ← joins scene clips into one video
      │
      ▼
Caption overlay         ← adds text captions with FFmpeg drawtext
      │
      ▼
Portrait conversion     ← converts 832×480 landscape → 1080×1920 (9:16)
      │                    with blurred background fill
      ▼
YouTube upload          ← uploads as public Short via YouTube API
```
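The three FFmpeg stages in the diagram can be sketched as command builders. This is an illustration of the approach, not the repo's actual code — the function names, caption styling, and blur strength are assumptions:

```python
from pathlib import Path

def concat_cmd(clips: list[str], out: str) -> list[str]:
    """FFmpeg concat demuxer: joins scene clips without re-encoding."""
    listfile = Path("scenes.txt")
    listfile.write_text("".join(f"file '{c}'\n" for c in clips))
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", str(listfile), "-c", "copy", out]

def caption_cmd(src: str, text: str, out: str) -> list[str]:
    """Burn a caption onto the video with the drawtext filter.

    Note: real captions need escaping for characters like ':' and "'".
    """
    drawtext = (f"drawtext=text='{text}':fontcolor=white:fontsize=48:"
                "box=1:boxcolor=black@0.5:x=(w-text_w)/2:y=h-200")
    return ["ffmpeg", "-y", "-i", src, "-vf", drawtext, out]

def portrait_cmd(src: str, out: str) -> list[str]:
    """832×480 landscape → 1080×1920 portrait with a blurred background fill."""
    vf = ("split[bg][fg];"
          "[bg]scale=1080:1920,boxblur=20[bg];"   # stretched + blurred backdrop
          "[fg]scale=1080:-2[fg];"                # sharp clip, width-fit
          "[bg][fg]overlay=(W-w)/2:(H-h)/2")      # center the clip vertically
    return ["ffmpeg", "-y", "-i", src, "-filter_complex", vf, out]
```

Each builder returns an argument list suitable for `subprocess.run`, so the pipeline can inspect or log the exact command before executing it.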
- Text-to-video: Write scene prompts → get a complete Short
- Auto-captions: Scene text overlaid on video
- Portrait conversion: Automatic landscape → 1080×1920 Shorts format
- YouTube API integration: Uploads directly, sets title/description/tags/privacy
- Daily automation: Run on a schedule, report results
- Low-VRAM mode: Works on 6GB GPUs with env overrides
- Batch generation: Generate and upload multiple videos in one run
- OS: Windows 10/11 (WanGP requires Windows + CUDA)
- GPU: NVIDIA with 8GB+ VRAM (6GB works — see Low-VRAM Mode)
- Python: 3.10+
- FFmpeg: installed and on `PATH` — download from [ffmpeg.org](https://ffmpeg.org/download.html)
- WanGP: installed separately — see WanGP Setup below
- YouTube Data API v3 credentials — see YouTube Auth
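A quick way to verify the toolchain before a run — a minimal sketch using only the standard library (the function names are illustrative, not part of the repo):

```python
import shutil
import sys

def check_tool(name: str) -> bool:
    """Return True if an executable (e.g. ffmpeg) is on PATH."""
    return shutil.which(name) is not None

def preflight() -> list[str]:
    """Collect human-readable problems instead of failing mid-pipeline."""
    problems = []
    if sys.version_info < (3, 10):
        problems.append(f"Python 3.10+ required, found {sys.version.split()[0]}")
    if not check_tool("ffmpeg"):
        problems.append("ffmpeg not found on PATH")
    return problems
```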
```bash
git clone https://github.com/lijinlar/ai-shorts-lab
cd ai-shorts-lab
python -m venv .venv
.venv\Scripts\activate
pip install -r requirements.txt
```

All user-specific paths live in `config.yaml`. Copy the example and fill in your paths:
```bash
cp config.example.yaml config.yaml
```

Then edit `config.yaml`:
```yaml
# Path to your local WanGP installation
wangp:
  dir: "C:/path/to/Wan2GP"

# Path to your YouTube OAuth2 client credentials JSON
# Download from: Google Cloud Console → APIs & Services → Credentials → OAuth 2.0 Client ID
youtube:
  oauth_credentials: "C:/path/to/youtube.oauth.json"
```
`config.yaml` is gitignored and never committed; `config.example.yaml` is the safe template.
## WanGP Setup

WanGP is the video generation backend. It runs locally on your GPU using the Wan2.1 model.
```bash
git clone https://github.com/deepbeepmeep/Wan2GP
cd Wan2GP
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```

WanGP will prompt you to download the model on first run, or you can download manually:
- Wan2.1-T2V-14B (recommended, ~30GB) — best quality, needs 16GB VRAM or 8GB with CPU offload
- Wan2.1-T2V-1.3B (lightweight, ~5GB) — runs on 6GB VRAM, faster but lower quality
Follow the WanGP model download guide for exact steps.
By default, ai-shorts-lab expects WanGP at:
`C:\Users\<YourUsername>\.wan2gp\Wan2GP\`
To use a different path, edit `WAN2GP_DIR` at the top of `scripts/generate_shorts_wangp.py`:

```python
WAN2GP_DIR = Path(r"C:\path\to\your\Wan2GP")
```

Test that WanGP generates a video before running the full pipeline:
```bash
cd Wan2GP
venv\Scripts\activate
python wgp.py
```

This opens the WanGP UI. Generate a test clip to confirm your GPU setup is working.
## YouTube Auth

- Go to Google Cloud Console
- Create a new project
- Enable the YouTube Data API v3
- Go to APIs & Services → Credentials
- Create an OAuth 2.0 Client ID → Application type: Desktop app
- Download the JSON file — save it anywhere on your machine (e.g. `C:/secrets/youtube.oauth.json`)
- Set the path in your `config.yaml`:

```yaml
youtube:
  oauth_credentials: "C:/secrets/youtube.oauth.json"
```
```bash
python scripts/youtube_auth.py --channel main
```

This opens a browser window to authorize your YouTube account. The token is saved to `out/youtube_token.json` and reused for future runs.
For multiple channels:
```bash
python scripts/youtube_auth.py --channel dogs   # saves out/youtube_token_dogs.json
python scripts/youtube_auth.py --channel main   # saves out/youtube_token.json
```

Storyboards are JSON files that define what to generate. See `storyboards/example.json` for a full example.
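The per-channel token naming can be sketched as a small helper. This illustrates the convention only — it is not the repo's actual function:

```python
from pathlib import Path

def token_path(channel: str, out_dir: str = "out") -> Path:
    """Map a channel name to its saved OAuth token file.

    The default channel "main" uses out/youtube_token.json; any other
    channel gets a suffixed file, e.g. out/youtube_token_dogs.json.
    """
    name = ("youtube_token.json" if channel == "main"
            else f"youtube_token_{channel}.json")
    return Path(out_dir) / name
```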
```json
{
  "default": {
    "sceneSeconds": 3,
    "fps": 24,
    "width": 480,
    "height": 832,
    "backend": "wangp",
    "upscale4k": false
  },
  "videos": [
    {
      "title": "Dog Reunites With Owner After 18 Months 😭 #shorts",
      "description": "Emotional dog reunion. #shorts #dog",
      "scenes": [
        {
          "prompt": "Extreme close-up: a golden retriever's nose twitches near a front door, amber light, static camera. Ultra-realistic, photorealistic, cinematic 4K, shallow depth of field, smooth motion, sharp focus"
        },
        {
          "prompt": "Medium wide: soldier drops duffel bag at door, retriever freezes with ears perked, tail wagging, warm hallway light, slow push-in. Ultra-realistic, photorealistic, cinematic 4K, shallow depth of field, smooth motion, sharp focus"
        }
      ]
    }
  ]
}
```

Each scene prompt should include:
- Shot type — `Extreme close-up`, `Medium wide`, `Low angle`, etc.
- Subject detail — breed, color, expression, body language
- Action — precise movement with speed/direction
- Environment — lighting, surfaces, time of day
- Camera motion — `static`, `slow push-in`, `tracking shot`, etc.
- Quality suffix — always end with: `Ultra-realistic, photorealistic, cinematic 4K, shallow depth of field, smooth motion, sharp focus`
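A checklist this regular lends itself to assembling prompts programmatically. A sketch — the function and its field names are illustrative, not part of the repo:

```python
QUALITY_SUFFIX = ("Ultra-realistic, photorealistic, cinematic 4K, "
                  "shallow depth of field, smooth motion, sharp focus")

def build_prompt(shot: str, subject: str, action: str,
                 environment: str, camera: str) -> str:
    """Assemble a scene prompt in the recommended order, ending with the quality suffix."""
    return f"{shot}: {subject}, {action}, {environment}, {camera}. {QUALITY_SUFFIX}"

# e.g. the first example scene above:
prompt = build_prompt("Extreme close-up", "a golden retriever's nose",
                      "twitches near a front door", "amber light", "static camera")
```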
Run the full pipeline:

```bash
python scripts/full_daily_pipeline.py --storyboard storyboards/example.json
```

Upload as unlisted instead of public:

```bash
python scripts/full_daily_pipeline.py --storyboard storyboards/example.json --privacy unlisted
```

Generate a video without uploading:

```bash
python scripts/generate_shorts_wangp.py storyboards/example.json out/my_video.mp4
```

Set up a Windows Task Scheduler task or any cron-compatible scheduler to run daily:
```bash
cd C:\path\to\ai-shorts-lab
.venv\Scripts\activate
python scripts/full_daily_pipeline.py --storyboard storyboards/todays_storyboard.json
```

## Low-VRAM Mode

If you have a 6GB GPU or hit CUDA out-of-memory errors, set these environment variables before running:
PowerShell:

```powershell
$env:WANGP_MODEL_TYPE = "t2v_1.3B"
$env:WANGP_STEPS = "10"
$env:WANGP_CFG = "4.0"
```

Command Prompt:

```bat
set WANGP_MODEL_TYPE=t2v_1.3B
set WANGP_STEPS=10
```

This switches to the 1.3B-parameter model and reduces inference steps — much lower VRAM usage, slightly lower quality.
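On the Python side, overrides like these are typically read with `os.environ.get`, falling back to full-quality defaults. A sketch of the pattern — the variable names match the overrides above, but the default values and function name are assumptions, not the repo's actual settings:

```python
import os

def wangp_settings() -> dict:
    """Resolve generation settings, letting env vars override the defaults."""
    return {
        "model_type": os.environ.get("WANGP_MODEL_TYPE", "t2v_14B"),
        "steps": int(os.environ.get("WANGP_STEPS", "30")),
        "cfg": float(os.environ.get("WANGP_CFG", "7.5")),
    }
```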
```
ai-shorts-lab/
├── scripts/
│   ├── full_daily_pipeline.py                 # Main entry point — runs the full pipeline
│   ├── generate_shorts_wangp.py               # Core: storyboard → video via WanGP
│   ├── batch_generate_upload_series_wangp.py  # Batch multi-video generation
│   ├── add_captions.py                        # FFmpeg caption overlay
│   ├── concat_scenes.py                       # FFmpeg scene concatenation
│   ├── convert_to_shorts_format.py            # Landscape → 1080×1920 portrait
│   ├── youtube_upload.py                      # YouTube API upload
│   ├── youtube_auth.py                        # OAuth2 token setup
│   ├── youtube_analytics_report.py            # Performance analytics
│   ├── combine_and_upload.py                  # Combine scenes + upload in one step
│   ├── wangp_generate_scene.py                # Low-level WanGP scene generator
│   ├── auto_process_wangp.py                  # WanGP queue auto-processor
│   ├── daily_youtube_automation.py            # Alternate daily automation entry
│   └── archive/                               # Experimental/superseded scripts
├── storyboards/
│   └── example.json                           # Example storyboard — start here
├── out/                                       # Generated videos and tokens (gitignored)
├── requirements.txt
└── README.md
```
- WanGP always outputs 832×480 — the model generates landscape video regardless of the `width`/`height` params in the storyboard. The pipeline automatically converts to 1080×1920 portrait using a blurred background fill. This is expected behavior.
- No audio by default — WanGP generates silent video. Add background music separately via FFmpeg if needed.
- GPU memory spikes — each scene is generated independently to avoid OOM. Large frame counts (>81 frames) can still crash on 8GB GPUs.
- YouTube Shorts classification — YouTube automatically classifies videos as Shorts if they are ≤60 seconds and in portrait (9:16) format. Adding `#shorts` to the title helps.
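The classification criteria lend themselves to a pre-upload sanity check. A sketch — the function name and warning wording are illustrative, not part of the repo:

```python
def shorts_ready(duration_s: float, width: int, height: int, title: str) -> list[str]:
    """Warn about anything that could stop YouTube classifying the upload as a Short."""
    warnings = []
    if duration_s > 60:
        warnings.append(f"{duration_s:.0f}s long — Shorts must be <=60 seconds")
    if width >= height:
        warnings.append(f"{width}x{height} is not portrait — expected 9:16 (e.g. 1080x1920)")
    if "#shorts" not in title.lower():
        warnings.append("title lacks #shorts — adding it helps classification")
    return warnings
```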
MIT — free to use, modify, and build on.
- Video generation: WanGP / Wan2.1 by deepbeepmeep
- YouTube API: Google YouTube Data API v3
- Video processing: FFmpeg