Images speak volumes. They express what words cannot capture, such as style and mood. That’s why the Image Prompt Adapter (IP-Adapter) in Stable Diffusion is so powerful. Now you can use the same technique with a Flux model. In this post, I will share 3 workflows for using image prompts with the Flux.1 Dev model.…
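If you prefer scripting to ComfyUI, here is a minimal sketch of the same idea using the Hugging Face diffusers library with the XLabs Flux IP-Adapter. This is my own assumption, not the workflow from the post; the file name and prompts are placeholders, and the loading API may differ between diffusers versions.

```python
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the XLabs Flux IP-Adapter and its CLIP image encoder.
pipe.load_ip_adapter(
    "XLabs-AI/flux-ip-adapter",
    weight_name="ip_adapter.safetensors",
    image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14",
)
pipe.set_ip_adapter_scale(1.0)  # how strongly the reference image steers the output

style_image = load_image("reference.png")  # placeholder reference image

image = pipe(
    prompt="a cat sitting on a windowsill",  # placeholder prompt
    ip_adapter_image=style_image,            # the image prompt
    guidance_scale=3.5,
    num_inference_steps=50,
).images[0]
image.save("result.png")
```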
How to outpaint with Flux Fill model
The Flux Fill model is an excellent choice for inpainting. Did you know it works equally well for outpainting (extending an image)? In this tutorial, I will show you how to use the Flux Fill model for outpainting in ComfyUI.…
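The core idea behind outpainting with an inpainting model is simple: paste the original image onto a larger canvas and mask the new border area for the model to fill. Below is a minimal sketch of that trick using diffusers' FluxFillPipeline rather than the ComfyUI workflow from the tutorial; the file name, prompt, and padding size are placeholders.

```python
import torch
from PIL import Image
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

src = load_image("photo.png")  # placeholder input image

# Outpainting as inpainting: paste the original onto a wider canvas,
# then mask everything except the original (white = repaint, black = keep).
pad = 256  # pixels to extend on each side; keep dimensions divisible by 16
canvas = Image.new("RGB", (src.width + 2 * pad, src.height), "white")
canvas.paste(src, (pad, 0))
mask = Image.new("L", canvas.size, 255)
mask.paste(Image.new("L", src.size, 0), (pad, 0))

out = pipe(
    prompt="a wide, scenic landscape",  # placeholder prompt for the new areas
    image=canvas,
    mask_image=mask,
    height=canvas.height,
    width=canvas.width,
    guidance_scale=30,        # Flux Fill is typically run with high guidance
    num_inference_steps=50,
).images[0]
out.save("outpainted.png")
```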
Flux-LTX Text-to-Video workflow (ComfyUI)
LTX-Video is a fast local video model that can quickly produce high-quality video. The model has an image-to-video mode, which can turn a still image into a video. Naturally, the Flux model is the best choice for generating that initial still image. This approach combines Flux’s excellent image quality with a fast video generation workflow.…
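As a rough illustration of the same two-stage idea outside ComfyUI, the sketch below generates a still with Flux and feeds it to LTX-Video's image-to-video pipeline in diffusers. The prompts and resolution are placeholder choices, not values from the workflow.

```python
import torch
from diffusers import FluxPipeline, LTXImageToVideoPipeline
from diffusers.utils import export_to_video

# Stage 1: a high-quality still image from Flux.
flux = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
flux.enable_model_cpu_offload()
still = flux(
    prompt="a lighthouse on a cliff at sunset",  # placeholder prompt
    height=480, width=704,                       # match the video resolution
    guidance_scale=3.5,
    num_inference_steps=50,
).images[0]
del flux
torch.cuda.empty_cache()  # free VRAM before loading the video model

# Stage 2: animate the still with LTX-Video image-to-video.
ltx = LTXImageToVideoPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)
ltx.enable_model_cpu_offload()
frames = ltx(
    image=still,
    prompt="waves crash against the cliff, camera slowly pans",  # placeholder
    width=704, height=480,
    num_frames=97,            # about 4 seconds at 24 fps
    num_inference_steps=50,
).frames[0]
export_to_video(frames, "flux_ltx.mp4", fps=24)
```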
Fast Local video: LTX Video
LTX Video is a fast, local video AI model full of potential. The Diffusion Transformer (DiT) Video model supports generating videos from text alone or with an input image. The model is small, with only 2 billion parameters. As a result, it only requires 6 GB VRAM and generates a 4-second video in 20 secs…
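For reference, text-to-video with LTX-Video can look like this in diffusers: a minimal sketch with a placeholder prompt, using CPU offloading to stay within a small VRAM budget.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload idle components to keep VRAM usage low

frames = pipe(
    prompt="a hot air balloon drifting over a misty valley at dawn",  # placeholder
    negative_prompt="worst quality, inconsistent motion, blurry, jittery",
    width=704,
    height=480,
    num_frames=97,            # roughly 4 seconds at 24 fps
    num_inference_steps=50,
).frames[0]
export_to_video(frames, "ltx_video.mp4", fps=24)
```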
How to use Flux.1 Fill model for inpainting
Using the standard Flux checkpoint for inpainting is not ideal. You must carefully adjust the denoising strength: setting it too high causes inconsistency, and setting it too low changes nothing. The real drawback of an intermediate denoising strength is that the new content never deviates far from the original colors. In this post,…
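Because Flux Fill is trained for inpainting, it takes the image and a mask directly, with no denoising-strength balancing act. Here is a minimal diffusers sketch (placeholder files and prompt; the post itself uses ComfyUI):

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("room.png")      # placeholder: image to edit
mask = load_image("room_mask.png")  # placeholder: white = area to repaint

result = pipe(
    prompt="a red leather armchair",  # placeholder prompt for the masked area
    image=image,
    mask_image=mask,
    guidance_scale=30,       # Flux Fill is typically run with high guidance
    num_inference_steps=50,
).images[0]
result.save("inpainted.png")
```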
AnimateDiff dance transfer (ComfyUI)
Do you have any artistic ideas for creating a dancing object? You can create one quickly and easily using this ComfyUI workflow. This example workflow transforms a dance video into dancing spaghetti. You must be a member of this site to download this workflow.…
Mochi movie video workflow (ComfyUI)
Mochi is a new state-of-the-art local video model for generating short clips. What if you want to tell a story by chaining a few clips together? You can easily do that with this Mochi movie workflow, which generates and combines 4 Mochi video clips to form a longer video. The movie is generated from text and…
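Conceptually, the trick is to generate one clip per shot and append the frames before encoding the final video. Below is a rough sketch of that loop with diffusers' MochiPipeline; the shot prompts are placeholders, and the post's actual workflow is built in ComfyUI.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()  # decode the video in tiles to avoid VAE OOMs

# Placeholder shot list: one prompt per clip of the "movie".
shots = [
    "a rocket lifts off at dawn, cinematic wide shot",
    "the rocket cruises past the moon, stars in the background",
    "the lander touches down on red desert sand",
    "an astronaut steps out and plants a flag",
]

movie = []
for prompt in shots:
    clip = pipe(prompt=prompt, num_frames=85).frames[0]
    movie.extend(clip)  # append this clip's frames to the running movie

export_to_video(movie, "mochi_movie.mp4", fps=30)
```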
Local image-to-video with CogVideoX
Local AI video has come a long way since the release of the first local video models. The quality is much higher, and you can now control the video generation with both an image and a prompt. In this tutorial, I will show you how to set up and run the state-of-the-art video model CogVideoX…
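A minimal image-to-video call with CogVideoX in diffusers looks roughly like this (placeholder image and prompt; the tutorial covers the full setup):

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # keep VRAM usage manageable

image = load_image("start_frame.png")  # placeholder starting image

frames = pipe(
    prompt="the boat sails gently across the lake",  # placeholder prompt
    image=image,
    num_frames=49,
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]
export_to_video(frames, "cogvideox.mp4", fps=8)
```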
How to run Mochi text-to-video on ComfyUI
Mochi is one of the best video AI models you can run locally on a PC. It turns your text prompt into a 480p video. In this tutorial, I will show you how to install and run the Mochi 1 model in ComfyUI.…
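If you just want a first clip outside ComfyUI, the shortest path in diffusers is a single call; everything here except the pipeline itself is a placeholder.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade some speed for a smaller VRAM footprint
pipe.enable_vae_tiling()

frames = pipe(
    prompt="a corgi running on a beach at golden hour",  # placeholder prompt
    num_frames=85,
    generator=torch.Generator().manual_seed(42),  # reproducible result
).frames[0]
export_to_video(frames, "mochi.mp4", fps=30)
```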
Animated face art (ComfyUI)
This workflow generates an animated face video in a polygon art style. You need to be a member of this site to download the ComfyUI workflow.…