
ComfyUI-LTXVideo


A collection of powerful custom nodes that extend ComfyUI's capabilities for the LTX-2 video generation model.

LTX-2 is built into ComfyUI core, making it readily accessible to all ComfyUI users. This repository hosts additional nodes and workflows to help you get the most out of LTX-2's advanced features.

To learn more about LTX-2, see the main LTX-2 repository for model details and additional resources.

Prerequisites

Before you begin using an LTX-2 workflow in ComfyUI, make sure you have an up-to-date ComfyUI installation and the required models described below.

Quick Start 🚀

We recommend using the LTX-2 workflows available in Comfy Manager.

  1. Open ComfyUI
  2. Click the Manager button (or press Ctrl+M)
  3. Select Install Custom Nodes
  4. Search for “LTXVideo”
  5. Click Install
  6. Wait for installation to complete
  7. Restart ComfyUI

The nodes will appear in your node menu under the “LTXVideo” category. Required models will be downloaded on first use.
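
If you prefer a manual installation, you can also clone this repository directly into ComfyUI's custom_nodes folder. This is a sketch assuming a standard ComfyUI layout; the final step applies only if your checkout contains a requirements.txt:

  cd ComfyUI/custom_nodes
  git clone https://github.com/Lightricks/ComfyUI-LTXVideo.git
  pip install -r ComfyUI-LTXVideo/requirements.txt

Restart ComfyUI afterwards so the new nodes are picked up.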

Example Workflows

The ComfyUI-LTXVideo installation includes several example workflows. You can see them all at:

ComfyUI/custom_nodes/ComfyUI-LTXVideo/example_workflows/
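
You can load any of them by dragging the workflow's JSON file onto the ComfyUI canvas, or through ComfyUI's workflow open dialog.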

Union IC-LoRA Model

We introduce a new Union IC-LoRA model that combines depth, pose, and edge control conditions into a single unified LoRA.

Key Features

  • Unified Control: A single LoRA that supports multiple control conditions (depth, human pose, or edges).
  • Downsampled Latent Processing: The union LoRA operates on a downsampled latent size, which reduces memory usage and significantly speeds up inference while maintaining quality.

How It Works

The union LoRA is trained to understand and respond to all three control signals (depth maps, pose skeletons, and edge maps) within a single model. The model learns to:

  1. Parse multiple conditions: Identify which control signals are present in the input
  2. Process at reduced resolution: Work on downsampled latents to improve efficiency
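
To make step 2 concrete, here is a minimal, hypothetical PyTorch sketch of what spatially downsampling a video latent before control processing could look like. The tensor shapes and the downsample_latent helper are illustrative assumptions for this example, not the repository's actual implementation:

  # Illustrative sketch only: shows the downsampled-latent idea, not the repo's code.
  import torch
  import torch.nn.functional as F

  def downsample_latent(latent: torch.Tensor, factor: int = 2) -> torch.Tensor:
      """Spatially downsample a video latent of shape (B, C, T, H, W)."""
      b, c, t, h, w = latent.shape
      # Fold time into the batch dimension so 2D interpolation runs per frame.
      frames = latent.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
      frames = F.interpolate(frames, scale_factor=1.0 / factor, mode="bilinear")
      return frames.reshape(b, t, c, h // factor, w // factor).permute(0, 2, 1, 3, 4)

  latent = torch.randn(1, 128, 8, 64, 96)  # hypothetical latent: batch, channels, frames, H, W
  small = downsample_latent(latent)        # control branch would run at half spatial resolution
  print(small.shape)                       # torch.Size([1, 128, 8, 32, 48])

Running the control branch on the smaller latent is what reduces memory usage and speeds up inference, as noted under Key Features.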

Required Models

Download the following models:

LTX-2 Model Checkpoint - Choose and download one of the models to the COMFYUI_ROOT_FOLDER/models/checkpoints folder.

Spatial Upscaler - Required for the current two-stage pipeline implementations in this repository. Download to the COMFYUI_ROOT_FOLDER/models/latent_upscale_models folder.

Temporal Upscaler - Required for the current two-stage pipeline implementations in this repository. Download to the COMFYUI_ROOT_FOLDER/models/latent_upscale_models folder.

Distilled LoRA - Required for the current two-stage pipeline implementations in this repository (except DistilledPipeline and ICLoraPipeline). Download to the COMFYUI_ROOT_FOLDER/models/loras folder.

Gemma Text Encoder - Download all files from the repository to COMFYUI_ROOT_FOLDER/models/text_encoders/gemma-3-12b-it-qat-q4_0-unquantized.

LoRAs - Choose and download to the COMFYUI_ROOT_FOLDER/models/loras folder.
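
Once everything is downloaded, the relevant parts of the ComfyUI models folder should look roughly like this (file names will vary depending on which checkpoints and LoRAs you choose):

  COMFYUI_ROOT_FOLDER/models/
  ├── checkpoints/                  # LTX-2 model checkpoint
  ├── latent_upscale_models/        # spatial and temporal upscalers
  ├── loras/                        # distilled LoRA and other LoRAs
  └── text_encoders/
      └── gemma-3-12b-it-qat-q4_0-unquantized/   # all Gemma text encoder files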

Advanced Techniques

Low VRAM

  • For systems with low VRAM, you can use the model loader nodes from low_vram_loaders.py. These nodes ensure the correct order of execution and perform model offloading so that generation fits in 32 GB of VRAM.
  • Use ComfyUI's --reserve-vram launch parameter: python main.py --reserve-vram 5 (replace 5 with the number of gigabytes to reserve).
  • For complete information about using LTX-2 models, workflows, and nodes in ComfyUI, please visit our Open Source documentation.