Flux-Wan 2.1 video workflow (ComfyUI)

This workflow generates beautiful videos of mechanical insects from text prompts. You can run it locally or on a ComfyUI service. It uses Flux AI to generate a high-quality image, followed by Wan 2.1 Video for animation with a Teacache speed-up. You must be a member of this site to download the following ComfyUI workflow…

Flux image copier

This ComfyUI workflow copies the input image and generates a new one with the Flux.1 Dev model. You can also add keywords to the prompt to modify the image…

Speeding up Hunyuan Video 3x with Teacache

Hunyuan Video is one of the highest-quality video models that can run on a local PC. This versatile model supports generating videos from text, directing the video with a reference image, LoRA fine-tuning, and image-to-video. The only issue for most users is that it is quite slow. In this tutorial, I will show…

How to speed up Wan 2.1 Video with Teacache and Sage Attention

Wan 2.1 Video is a state-of-the-art AI model that you can run locally on your PC. However, generating a high-quality 720p video takes time, and refining a video over multiple generations takes even longer. This fast Wan 2.1 workflow uses Teacache and Sage Attention to reduce…

How to use Wan 2.1 LoRA to rotate and inflate characters

Wan 2.1 Video is a generative AI video model that produces high-quality video on consumer-grade computers. Remade AI, an AI video company, has released some interesting special-purpose LoRA models for Wan 2.1 Video. These LoRAs add special effects to Wan 2.1 Video, such as rotating or squeezing a character. In…

How to use LTX Video 0.9.5 on ComfyUI

LTX Video 0.9.5 is an improved version of the LTX local video model. The model is very fast — it generates a 4-second video in 17 seconds on a consumer-grade RTX 4090 GPU. It’s not quite real-time, but very close. In this article, I will cover…

How to run Hunyuan Image-to-video on ComfyUI

The Hunyuan Video model has been a huge hit in the open-source AI community. It can generate high-quality videos from text, direct the video using a reference image, and be modified with LoRA. The only thing it lacked was an image-to-video function like LTX’s. The good news is that the Hunyuan Image-to-Video model is now available! Read…

How to run Wan 2.1 Video on ComfyUI

Wan 2.1 Video is a series of open foundational video models that supports a wide range of video-generation tasks. It can turn images or text descriptions into videos at 480p or 720p resolution. In this post:…
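Once a workflow like this is set up, it can also be queued programmatically. Here is a minimal sketch, assuming a ComfyUI server on the default local address (127.0.0.1:8188) and a graph exported with ComfyUI's "Save (API Format)" option; the file name `workflow.json` is a placeholder for your own export.

```python
# Minimal sketch: queue an API-format ComfyUI workflow on a local server.
import json
import urllib.request
import uuid


def build_prompt_payload(workflow, client_id=None):
    """Wrap an API-format workflow graph in the payload /prompt expects."""
    return {
        "prompt": workflow,
        "client_id": client_id or uuid.uuid4().hex,
    }


def queue_workflow(workflow, host="127.0.0.1:8188"):
    """POST the workflow graph to the ComfyUI server and return its reply."""
    data = json.dumps(build_prompt_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

To use it, load your exported graph with `json.load(open("workflow.json"))` and pass it to `queue_workflow`; the server queues the job and renders it exactly as if you had clicked "Queue Prompt" in the UI.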

How to create text effect in ComfyUI

This workflow uses an SDXL model with the CPDS ControlNet to blend text seamlessly with images. You can directly enter the text in the workflow…

CodeFormer: Enhancing facial detail in ComfyUI

CodeFormer is a robust face restoration tool that enhances facial features, making them more realistic and detailed. Integrating CodeFormer into ComfyUI lets you improve facial quality seamlessly within your workflows. This tutorial will guide you through installing and using CodeFormer in ComfyUI…