How to create videos with Google Veo 2

You can now use Veo 2, Google’s AI-powered video generation model, in Google AI Studio. It supports text-to-video and, more importantly, image-to-video. You can generate a high-resolution (720p) video clip of up to 8 seconds from a simple prompt, optionally with an image. If you use online AI video services like Kling or Runway…
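If you prefer to script generation instead of clicking through the AI Studio interface, the Gemini API also exposes Veo 2 through the google-genai Python SDK. The sketch below is a minimal, hedged example, not part of the original article: the model id veo-2.0-generate-001 and the GenerateVideosConfig fields follow Google's published examples but may differ between SDK versions, and the API key and prompt are placeholders.

```python
# Minimal sketch: generate an 8-second 720p clip with Veo 2 via the google-genai SDK.
# Assumes `pip install google-genai`; model id and config fields may vary by SDK version.
import time
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Video generation is a long-running operation that has to be polled.
operation = client.models.generate_videos(
    model="veo-2.0-generate-001",
    prompt="A low-angle tracking shot of a red vintage car in a neon-lit city at night",
    config=types.GenerateVideosConfig(
        aspect_ratio="16:9",
        duration_seconds=8,   # Veo 2 clips top out at 8 seconds
        number_of_videos=1,
    ),
)

while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the finished clip.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo2_clip.mp4")
```

For image-to-video, the same call also accepts an input image, but check the SDK documentation for the exact argument type your version expects.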

FramePack: long AI video with low VRAM

FramePack is a video generation method that uses little VRAM (6 GB) regardless of the video length. It supports image-to-video, turning an image into a video with text instructions. In this tutorial, I will cover what FramePack is, how frame packing and anti-drifting sampling work, the underlying video model, and how to install FramePack on Windows, with 5-second and 10-second FramePack video examples…

Flux Hunyuan Text-to-Video workflow (ComfyUI)

This workflow combines an image generation model (Flux) with a video generation model (Hunyuan): Flux generates a high-quality still image, and Hunyuan animates it into a video. You need to be a member of this site to download the ComfyUI workflow. (Image: sci-fi spaceship generated using the Flux Hunyuan workflow.) The step-by-step guide covers updating ComfyUI, downloading the Flux AI model, and…

Mechanical insect video (ComfyUI)

This workflow generates beautiful videos of mechanical insects from text prompts. You can run it locally or on a ComfyUI service. It uses Flux AI to generate a high-quality image, followed by Wan 2.1 Video for animation with a Teacache speed-up. You must be a member of this site to download the following ComfyUI workflow…
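As a hedged illustration of running a workflow like this locally without clicking through the GUI, the sketch below queues a workflow that has been exported from ComfyUI in API format (the Save (API Format) option) against a local ComfyUI server on the default port 8188. The file name mechanical_insect_api.json is a placeholder, not the actual workflow file from this article.

```python
# Minimal sketch: queue a ComfyUI workflow (exported in API format) on a local server.
# Assumes ComfyUI is running at http://127.0.0.1:8188; the JSON file name is hypothetical.
import json
import urllib.request

def queue_workflow(path: str, server: str = "http://127.0.0.1:8188") -> dict:
    # Load the workflow graph exported with "Save (API Format)" in ComfyUI.
    with open(path, "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # The /prompt endpoint accepts {"prompt": <workflow graph>} and returns a prompt_id.
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    request = urllib.request.Request(
        f"{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    result = queue_workflow("mechanical_insect_api.json")  # placeholder file name
    print("Queued prompt:", result.get("prompt_id"))
```

The queued job then renders in the background; you can watch progress in the ComfyUI console or browser tab as usual.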

Flux image copier

This ComfyUI workflow copies the input image and generates a new one with the Flux.1 Dev model. You can also add keywords to the prompt to modify the image. You must be a member of this site to download the following ComfyUI workflow…

Speeding up Hunyuan Video 3x with Teacache

Hunyuan Video is one of the highest-quality video models that can run on a local PC. This versatile model supports generating videos from text, directing the video with a reference image, LoRA fine-tuning, and image-to-video. The only issue for most users is that it is quite slow. In this tutorial, I will show…

How to speed up Wan 2.1 Video with Teacache and Sage Attention

Wan 2.1 Video is a state-of-the-art AI model that you can use locally on your PC. However, generating a high-quality 720p video takes time, and refining a video over multiple generations takes even longer. This fast Wan 2.1 workflow uses Teacache and Sage Attention to reduce…

How to use Wan 2.1 LoRA to rotate and inflate characters

Wan 2.1 Video is a generative AI video model that produces high-quality video on consumer-grade computers. Remade AI, an AI video company, has released some interesting special-purpose LoRA models for Wan 2.1 Video. The LoRAs create special effects for Wan 2.1 Video, such as rotating and squeezing a character. In…

How to use LTX Video 0.9.5 on ComfyUI

LTX Video 0.9.5 is an improved version of the LTX local video model. The model is very fast: it generates a 4-second video in 17 seconds on a consumer-grade RTX 4090 GPU. It’s not quite real-time, but very close. In this article, I will cover the software, running on Google Colab, LTXV 0.9.5 improvements, the license, text-to-video, image-to-video, and fixing the first…

How to run Hunyuan Image-to-video on ComfyUI

The Hunyuan Video model has been a huge hit in the open-source AI community. It can generate high-quality videos from text, direct the video using a reference image, and be customized with LoRA. The only thing it was missing was an image-to-video function like LTX’s image-to-video. The good news is that the Hunyuan Image-to-Video model is now available! Read…