This workflow generates four video clips and combines them into a single video. To improve the quality and control of each clip, the initial frame is generated with the Flux AI image model and then animated with Wan 2.1 Video, sped up with Teacache. You can run it locally or through a ComfyUI service. You must be…
Blog
Stylize photos with ChatGPT
Did you know you can use ChatGPT to stylize photos? This free, straightforward method yields impressive results. In this tutorial, I will show you how to convert an image into different styles. …
How to generate OmniHuman-1 lip sync video
Lip sync is notoriously tricky to get right with AI because we naturally talk with body movement. OmniHuman-1 is a human video generation model that can generate lip sync videos from a single image and an audio clip. The motion is highly realistic and matches the voice. OmniHuman-1 is currently only available through an online…
How to create FramePack videos on Google Colab
FramePack is a video generation method that lets you create long AI videos with limited VRAM. If you don’t have a decent Nvidia GPU, you can use FramePack on the Google Colab online service. It’s a cost-effective option, costing only around $0.20 per hour. …
How to create videos with Google Veo 2
You can now use Veo 2, Google’s AI-powered video generation model, on Google AI Studio. It supports text-to-video and, more importantly, image-to-video. You can generate a high-resolution (720p) video clip of up to 8 seconds from a simple prompt, optionally with an image. If you use online AI video services like Kling or Runway…
FramePack: long AI video with low VRAM
FramePack is a video generation method that consumes low VRAM (6 GB) regardless of the video length. It supports image-to-video, turning an image into a video with text instructions. In this tutorial, I cover 5-second and 10-second FramePack videos. …
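The key to FramePack's flat memory use is that older frames are compressed more aggressively, so the total context length stays fixed no matter how long the video grows. The Python sketch below illustrates that idea only; it is not FramePack's actual implementation, and the halving schedule and the `tokens_per_frame` value are assumptions chosen for illustration.

```python
# Conceptual sketch of FramePack-style frame packing (not the real code).
# Older frames get progressively fewer context tokens, so the total stays
# roughly constant regardless of video length -- hence the flat VRAM use.

def pack_frames(num_frames: int, tokens_per_frame: int = 1536) -> list[int]:
    """Return the token budget for each past frame, newest first.

    Each step back in time halves the budget (an assumed schedule), so the
    sum converges to about 2x tokens_per_frame however long the video is.
    """
    budgets = []
    tokens = tokens_per_frame
    for _ in range(num_frames):
        budgets.append(max(tokens, 1))  # keep at least 1 token per frame
        tokens //= 2
    return budgets

if __name__ == "__main__":
    for n in (4, 16, 64):
        print(f"{n:3d} frames -> {sum(pack_frames(n)):4d} context tokens")
```

Running it shows the context size barely moves as the frame count grows from 4 to 64, which is the property that keeps VRAM at around 6 GB.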
Flux Hunyuan Text-to-Video workflow (ComfyUI)
This workflow combines an image generation model (Flux) with a video generation model (Hunyuan). You need to be a member of this site to download the ComfyUI workflow. Image: a sci-fi spaceship generated using the Flux Hunyuan workflow. …
Flux-Wan 2.1 video workflow (ComfyUI)
This workflow generates beautiful videos of mechanical insects from text prompts. You can run it locally or through a ComfyUI service. It uses Flux AI to generate a high-quality image, followed by Wan 2.1 Video for animation, sped up with Teacache. You must be a member of this site to download the following ComfyUI workflow. …
Flux image copier
This ComfyUI workflow copies the input image and generates a new one with the Flux.1 Dev model. You can also add keywords to the prompt to modify the image. You must be a member of this site to download the following ComfyUI workflow. …
Speeding up Hunyuan Video 3x with Teacache
Hunyuan Video is one of the highest-quality video models that can run on a local PC. This versatile model supports generating videos from text, directing the video with a reference image, LoRA fine-tuning, and image-to-video. The only issue for most users is that it is quite slow. In this tutorial, I will show…
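Teacache gets its speed-up by reusing the transformer's output from a previous denoising step whenever that step's input has barely changed, skipping the expensive model call. Below is a minimal Python sketch of that caching policy, not Teacache's actual code; the `threshold` value and the relative-change metric are assumptions for illustration.

```python
import numpy as np

def run_with_cache(model, inputs, threshold=0.05):
    """Run `model` over per-step inputs, reusing cached residuals.

    Sketch of a Teacache-style policy: if the input changed little since
    the last step we actually computed, reuse the cached residual instead
    of calling the expensive model again.
    """
    cached_input, cached_residual = None, None
    outputs = []
    for x in inputs:  # one input per diffusion timestep
        if cached_input is not None:
            # Relative L1 change versus the last computed step.
            change = np.abs(x - cached_input).mean() / (
                np.abs(cached_input).mean() + 1e-8)
            if change < threshold:
                outputs.append(x + cached_residual)  # cache hit: skip model
                continue
        y = model(x)  # expensive transformer call
        cached_input, cached_residual = x, y - x
        outputs.append(y)
    return outputs

if __name__ == "__main__":
    calls = {"n": 0}

    def denoiser(x):  # stand-in for the real video transformer
        calls["n"] += 1
        return 0.9 * x

    base = np.random.default_rng(0).standard_normal(8)
    steps = [base * (1 - t / 60) for t in range(30)]  # slowly varying inputs
    run_with_cache(denoiser, steps)
    print(f"{calls['n']} model calls for {len(steps)} steps")
```

With slowly varying inputs, only about a third of the steps trigger a real model call, which is the kind of saving behind the roughly 3x speed-up in the title.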