Hunyuan Video is one of the highest-quality video models that can run on a local PC. The versatile model supports generating videos from text, directing the video with a reference image, LoRA fine-tuning, and image-to-video.
The main issue for most users is that it is quite slow. In this tutorial, I will show you how to speed up Hunyuan image-to-video by 3 times with minimal loss in quality in ComfyUI.
Software
We will use ComfyUI, an alternative to AUTOMATIC1111. You can use it on Windows, Mac, or Google Colab. If you prefer using a ComfyUI service, Think Diffusion offers our readers an extra 20% credit.
Read the ComfyUI beginner’s guide if you are new to ComfyUI. See the Quick Start Guide if you are new to AI images and videos.
Take the ComfyUI course to learn how to use ComfyUI step by step.
Running on Google Colab
If you use my ComfyUI Colab notebook, you don’t need to install the model files. They will be downloaded automatically.
Select the HunyuanVideo models before starting the notebook.

In the top menu, select Runtime > Change runtime type > L4 GPU. Save the settings.

Download the workflow JSON file from this tutorial and drop it into ComfyUI.
TeaCache speedup
TeaCache takes advantage of the observation that some neural network blocks don’t do much during sampling. Researchers have recognized that diffusion models generate the image outline in the initial sampling steps and fill in details in the later steps.

TeaCache intelligently determines when to use caches during sampling. It reuses the cached output when the current input is similar to the one that produced the cache, and it recomputes the cache only when the input becomes substantially different. You can control how often the cache is recomputed with a threshold value.
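The caching decision above can be sketched in a few lines of Python. This is an illustrative simplification, not the actual TeaCache code: the class name, the relative-L1 distance measure, and the accumulator logic are assumptions chosen to mirror the description.

```python
# Illustrative sketch of a TeaCache-style decision rule (assumed names and
# distance measure; not the actual TeaCache implementation).
import numpy as np

class TeaCacheSketch:
    def __init__(self, threshold=0.1):
        self.threshold = threshold   # larger threshold -> more reuse, faster
        self.prev_input = None
        self.cached_output = None
        self.accumulated = 0.0       # drift since the cache was last refreshed

    @staticmethod
    def rel_l1(a, b):
        # Relative L1 distance between the current and previous block inputs.
        return np.abs(a - b).mean() / (np.abs(b).mean() + 1e-8)

    def __call__(self, x, block_fn):
        if self.prev_input is not None:
            self.accumulated += self.rel_l1(x, self.prev_input)
        self.prev_input = x
        if self.cached_output is not None and self.accumulated < self.threshold:
            return self.cached_output      # input barely changed: reuse cache
        self.accumulated = 0.0             # input drifted: recompute and reset
        self.cached_output = block_fn(x)
        return self.cached_output
```

A higher threshold lets the sampler skip more block evaluations, which is the speed-versus-quality trade-off the speedup setting controls.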
Hunyuan image-to-video TeaCache
This workflow uses an input image as the initial frame and generates an MP4 video.
It uses the ComfyUI TTP Toolset to speed up Hunyuan Video. Times to generate a 720p (1280×720 pixels) video on my RTX 4090 (24 GB VRAM) are:

| TeaCache setting | Generation time |
|---|---|
| 1.0x | 16 mins |
| 1.6x | 11 mins |
| 4.4x | 5 mins |
TeaCache 1.0x (no speedup):
TeaCache 1.6x:
TeaCache 4.4x:
Step 0: Update ComfyUI
Before loading the workflow, make sure your ComfyUI is up-to-date. The easiest way to do this is to use ComfyUI Manager.
Click the Manager button on the top toolbar.

Select Update ComfyUI.

Restart ComfyUI.
Step 1: Download models
You already have these models if you followed my previous Hunyuan image-to-video tutorial.
Download hunyuan_video_image_to_video_720p_bf16.safetensors and put it in ComfyUI > models > diffusion_models.
Download clip_l.safetensors and llava_llama3_fp8_scaled.safetensors. Put them in ComfyUI > models > text_encoders.
Download hunyuan_video_vae_bf16.safetensors and put it in ComfyUI > models > vae.
Download llava_llama3_vision.safetensors and put it in ComfyUI > models > clip_vision.
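If you prefer scripting the file placement, the folder mapping above can be expressed as a small helper. This is a hypothetical convenience script, not part of the tutorial's workflow: the ComfyUI path is a placeholder, and the files themselves must still be downloaded from the links in the steps above.

```python
# Hypothetical helper that maps each model file to its ComfyUI folder.
# The ComfyUI path is a placeholder; the files themselves must still be
# downloaded from the links in the steps above.
from pathlib import Path

COMFYUI = Path("ComfyUI")  # adjust to your ComfyUI install location

# filename -> destination subfolder inside the ComfyUI directory
MODELS = {
    "hunyuan_video_image_to_video_720p_bf16.safetensors": "models/diffusion_models",
    "clip_l.safetensors": "models/text_encoders",
    "llava_llama3_fp8_scaled.safetensors": "models/text_encoders",
    "hunyuan_video_vae_bf16.safetensors": "models/vae",
    "llava_llama3_vision.safetensors": "models/clip_vision",
}

def target_path(filename: str) -> Path:
    # Create the destination folder if needed and return the full file path.
    folder = COMFYUI / MODELS[filename]
    folder.mkdir(parents=True, exist_ok=True)
    return folder / filename
```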
Step 2: Load workflow
Download the Hunyuan video workflow JSON file below.
Drop it into ComfyUI.
Step 3: Install missing nodes
If you see red blocks, you are missing a custom node that this workflow needs.
Click Manager > Install missing custom nodes and install the missing nodes.
Restart ComfyUI.
Step 4: Upload the input image
Upload an image you wish to use as the video’s initial frame. You can download my test image if you don’t have one.

Step 5: Revise prompt
Revise the prompt to what you want to generate.

Step 6: Generate a video
Click the Queue button to generate the video.
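If you want to queue generations from a script instead of the button, ComfyUI also exposes an HTTP endpoint for this. The sketch below assumes the default server address (127.0.0.1:8188) and a workflow exported in API format; the filename in the example is a placeholder.

```python
# Hedged sketch: queue a workflow through ComfyUI's HTTP API instead of the
# Queue button. Assumes the default server address and a workflow exported
# in API format (the filename below is a placeholder).
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"

def build_request(workflow_api: dict) -> urllib.request.Request:
    # ComfyUI expects a POST to /prompt with {"prompt": <API-format workflow>}.
    data = json.dumps({"prompt": workflow_api}).encode("utf-8")
    return urllib.request.Request(
        f"{SERVER}/prompt", data=data,
        headers={"Content-Type": "application/json"},
    )

def queue_prompt(workflow_api: dict) -> dict:
    with urllib.request.urlopen(build_request(workflow_api)) as resp:
        return json.loads(resp.read())

# Example (placeholder filename):
# with open("hunyuan_i2v_api.json") as f:
#     print(queue_prompt(json.load(f)))
```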

Tips
Change the noise_seed value to generate a different video.

If you see quality issues, reduce the speedup value in TeaCache HunyuanVideo Sampler.
