WAN 2.1 VACE (Video All-in-One Creation and Editing) is a video generation and editing model developed by the Alibaba team. It unifies text-to-video, reference-to-video (reference-guided generation), video-to-video with pose and depth control, inpainting, and outpainting under a single framework. You can use the WAN VACE model in ComfyUI with the…
Blog
How to run LTX Video 13B on ComfyUI (image-to-video)
LTX Video is a popular local AI model known for its generation speed and low VRAM usage. The LTXV-13B model has 13 billion parameters, more than six times the size of the previous 2B model. This translates to finer detail, better prompt adherence, and more coherent videos. In this tutorial, I will show you how to install and run…
Flux-Wan 2.1 four-clip movie (ComfyUI)
This workflow generates four video clips and combines them into a single video. To improve the quality and controllability of each clip, its initial frame is generated with the Flux AI image model and then animated with Wan 2.1 Video, accelerated by TeaCache. You can run it locally or on a ComfyUI service. You must be…
Stylize photos with ChatGPT
Did you know you can use ChatGPT to stylize photos? This free, straightforward method yields impressive results. In this tutorial, I will show you how to convert an image into different styles…
How to generate OmniHuman-1 lip sync video
Lip sync is notoriously tricky to get right with AI because we naturally move our bodies as we speak. OmniHuman-1 is a human video generation model that can generate lip sync videos from a single image and an audio clip. The motion is highly realistic and matches the voice. OmniHuman-1 is currently only available through an online…
How to create FramePack videos on Google Colab
FramePack is a video generation method that allows you to create long AI videos with limited VRAM. If you don’t have a decent Nvidia GPU, you can run FramePack on the Google Colab online service. It’s a cost-effective option, costing only around $0.20 per hour.
How to create videos with Google Veo 2
You can now use Veo 2, Google’s AI-powered video generation model, on Google AI Studio. It supports text-to-video and, more importantly, image-to-video. You can generate a high-resolution (720p) video clip of up to 8 seconds from a simple prompt, optionally with an image. If you use online AI video services like Kling or Runway…
FramePack: long AI video with low VRAM
FramePack is a video generation method that consumes little VRAM (6 GB) regardless of the video length. It supports image-to-video, turning an image into a video with text instructions. In this tutorial, I cover what FramePack is and how to install it on Windows…
Flux Hunyuan Text-to-Video workflow (ComfyUI)
This workflow combines an image generation model (Flux) with a video generation model (Hunyuan): Flux generates a high-quality still image, which Hunyuan then animates. You need to be a member of this site to download the ComfyUI workflow. (Pictured: a sci-fi spaceship generated using the Flux Hunyuan workflow.)
Flux-Wan 2.1 video workflow (ComfyUI)
This workflow generates beautiful videos of mechanical insects from text prompts. You can run it locally or on a ComfyUI service. It uses Flux AI to generate a high-quality image, which Wan 2.1 Video then animates, accelerated by TeaCache. You must be a member of this site to download the following ComfyUI workflow…