How to create videos with Google Veo 2

You can now use Veo 2, Google's AI-powered video generation model, on Google AI Studio. It supports text-to-video and, more ...
FramePack: long AI video with low VRAM

FramePack is a video generation method that consumes low VRAM (6 GB) regardless of the video length. It supports image-to-video, ...
Mechanical insect video (ComfyUI)

This workflow generates beautiful videos of mechanical insects from text prompts. You can run it locally or using a ComfyUI ...
Flux image copier

This ComfyUI workflow copies the input image and generates a new one with the Flux.1 Dev model. You ...
Speeding up Hunyuan Video 3x with Teacache

Hunyuan Video is one of the highest-quality video models that can be run on a local PC ...
How to speed up Wan 2.1 Video with Teacache and Sage Attention

Wan 2.1 Video is a state-of-the-art AI model that you can use locally on your PC. However, it does take ...
How to use Wan 2.1 LoRA to rotate and inflate characters

Wan 2.1 Video is a generative AI video model that produces high-quality video on consumer-grade computers. Remade AI, an AI ...
How to use LTX Video 0.9.5 on ComfyUI

LTX Video 0.9.5 is an improved version of the LTX local video model. The model is very fast — it ...
How to run Hunyuan Image-to-video on ComfyUI

The Hunyuan Video model has been a huge hit in the open-source AI community. It can generate high-quality videos from ...
How to run Wan 2.1 Video on ComfyUI

Wan 2.1 Video is a series of open foundational video models that supports a wide range of video-generation tasks. It ...