LTX-Video is a fast local video model that produces high-quality video quickly. The model has an image-to-video mode, which turns a still image into a video. The Flux model is a natural choice for generating that initial still image. This approach combines Flux’s excellent image quality with a fast video generation workflow.…
Blog
Fast Local Video: LTX Video
LTX Video is a fast, local video AI model full of potential. This Diffusion Transformer (DiT) video model supports generating videos from text alone or from text with an input image. The model is small, with only 2 billion parameters. As a result, it requires only 6 GB of VRAM and generates a 4-second video in 20 seconds…
How to use Flux.1 Fill model for inpainting
Using the standard Flux checkpoint for inpainting is not ideal. You must carefully adjust the denoising strength: setting it too high causes inconsistency, and setting it too low changes nothing. The real drawback of an intermediate denoising strength is that the new content won’t deviate much from the original colors. In this post,…
AnimateDiff dance transfer (ComfyUI)
Do you have any artistic ideas for a dancing object? You can create one quickly and easily using this ComfyUI workflow. This example workflow transforms a dance video into dancing spaghetti. You must be a member of this site to download this workflow.…
Mochi movie video workflow (ComfyUI)
Mochi is a new state-of-the-art local video model for generating short clips. What if you want to tell a story by chaining a few clips together? You can easily do that with this Mochi movie workflow, which generates and combines 4 Mochi video clips to form a long video. The movie is generated from text and…
Local image-to-video with CogVideoX
Local AI video has come a long way since the release of the first local video model. The quality is much higher, and you can now control the video generation with both an image and a prompt. In this tutorial, I will show you how to set up and run the state-of-the-art video model CogVideoX…
How to run Mochi 1 text-to-video on ComfyUI
Mochi 1 is one of the best video AI models you can run locally on a PC. It turns your text prompt into a 480p video. In this tutorial, I will show you how to install and run the Mochi 1 model in ComfyUI.…
Animated face art (ComfyUI)
This workflow generates an animated face video in a polygon art style. You need to be a member of this site to download the ComfyUI workflow.…
AnimateDiff Morph Art Style Portrait Video (ComfyUI)
This workflow generates a morphing portrait video across four different art styles, like the one below. The styles are controlled by text prompts. You can fine-tune each style by changing its prompt, and the transition pattern can be adjusted. You will need to be a member of this site to download the ComfyUI workflow.…
Stable Diffusion 3.5 Medium model on ComfyUI
Stable Diffusion 3.5 Medium is an AI image model that runs on consumer-grade GPU cards. It has 2.6 billion parameters, substantially lower than SD 3.5 Large’s 8 billion parameters. You should use SD 3.5 Medium over the Large model for a shorter generation time or a lower VRAM requirement. It is also the first Stable…