LTX Video is a popular local AI video model known for its generation speed and low VRAM usage. The LTXV-13B model has 13 billion parameters, more than a six-fold increase over the previous 2B model. This translates to better details, stronger prompt adherence, and more coherent videos.
In this tutorial, I will show you how to install and run the LTXV-13B image-to-video workflow in ComfyUI.
Software
We will use ComfyUI, an alternative to AUTOMATIC1111. You can use it on Windows, Mac, or Google Colab. If you prefer using a ComfyUI service, Think Diffusion offers our readers an extra 20% credit.
Read the ComfyUI beginner’s guide if you are new to ComfyUI. See the Quick Start Guide if you are new to AI images and videos.
Take the ComfyUI course to learn how to use ComfyUI step by step.
Improvement in LTXV 13B model
The LTXV 13B model is a huge step up from the 2B model.
- Higher quality: The quality of the 13B model is noticeably higher than the 2B model.
- Speed is not bad: It takes under 3 minutes to generate a 4-second video on my RTX 4090. It is slower than the 2B model, but it is still fast enough.
Alternative models
You may also consider the following models, which can turn an image into a video.
- Wan 2.1 Video: A workhorse in local image-to-video
- Hunyuan Video: Another high-quality choice
- FramePack: long AI video with low VRAM
- LTX Video 2B: The smaller version of LTXV. It is faster, but the quality is lower
LTX Video 13B Image-to-video workflow
Step 1: Download the workflow
Download the ComfyUI JSON workflow below.
Drag and drop the JSON file onto the ComfyUI window to load the workflow.
Step 2: Install missing nodes
If you see nodes with red borders, you don’t have the custom nodes required for this workflow. You should have ComfyUI Manager installed before performing this step.
Click Manager > Install Missing Custom Nodes.

Install the nodes that are missing.
Restart ComfyUI.
Refresh the ComfyUI page.
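If you prefer to see which node types a workflow needs before loading it, you can inspect the JSON directly. Here is a minimal sketch; it assumes the file is a standard ComfyUI UI export with a top-level `nodes` list, and the filename is a placeholder:

```python
import json
from collections import Counter

def node_types(workflow_path: str) -> Counter:
    """Count the node types used in a ComfyUI workflow JSON (UI export format)."""
    with open(workflow_path) as f:
        workflow = json.load(f)
    return Counter(node["type"] for node in workflow.get("nodes", []))

# Example (placeholder filename):
# print(node_types("ltxv-13b-i2v.json"))
```

Any node type you don't recognize from a stock ComfyUI install likely comes from a custom node pack, which is what ComfyUI Manager resolves for you in this step.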
Step 3: Download model files
Download the LTXV 13B model ltxv-13b-0.9.7-dev.safetensors. Put it in ComfyUI > models > checkpoints.
Download t5xxl_fp16.safetensors and put it in ComfyUI > models > text_encoders.
Now, you have installed all the software and models to run the workflow!
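If a model fails to load later, a misplaced file is the usual culprit. A short script like this can confirm the files landed in the right folders; it is a sketch assuming the default ComfyUI folder layout, so adjust `COMFYUI_ROOT` to your install:

```python
from pathlib import Path

# Assumed default ComfyUI install layout; change this to your install path.
COMFYUI_ROOT = Path("ComfyUI")

REQUIRED_FILES = {
    "checkpoints": "ltxv-13b-0.9.7-dev.safetensors",
    "text_encoders": "t5xxl_fp16.safetensors",
}

def missing_models(root: Path) -> list[str]:
    """Return the expected model files that are not present under root/models."""
    return [
        f"{folder}/{name}"
        for folder, name in REQUIRED_FILES.items()
        if not (root / "models" / folder / name).exists()
    ]

if __name__ == "__main__":
    missing = missing_models(COMFYUI_ROOT)
    if missing:
        print("Missing model files:", ", ".join(missing))
    else:
        print("All model files found.")
```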
Step 4: Upload an image
The video model uses the image you upload as the first frame and animates it. Upload an image to the Load Image node.

You can use the test image below.

Step 5: Run the workflow
Click the Run button to run the workflow.
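If you want to queue the workflow without clicking through the UI (for example, to batch several input images), ComfyUI's local server exposes a `/prompt` endpoint. This is a minimal sketch: it assumes a default local install at port 8188, and the workflow must be saved with ComfyUI's "Export (API)" option rather than the regular JSON export:

```python
import json
import urllib.request
import uuid

COMFYUI_URL = "http://127.0.0.1:8188"  # default address of a local ComfyUI server

def build_payload(api_workflow: dict, client_id: str) -> bytes:
    """Wrap an API-format workflow in the JSON body the /prompt endpoint expects."""
    return json.dumps({"prompt": api_workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(api_workflow: dict) -> dict:
    """POST the workflow to a running ComfyUI server and return its response."""
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=build_payload(api_workflow, uuid.uuid4().hex),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires ComfyUI running and an API-format export):
# with open("ltxv-13b-i2v-api.json") as f:
#     queue_workflow(json.load(f))
```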

Reference
- ComfyUI-LTXVideo: the custom node pack that adds LTX-Video support to ComfyUI
- Official repository for LTX-Video