Tutorials

Stylize photos with ChatGPT

Did you know you can use ChatGPT to stylize photos? This free, straightforward method yields impressive results. In this tutorial, …
How to generate OmniHuman-1 lip sync video

Lip sync is notoriously tricky to get right with AI because we naturally talk with body movement. OmniHuman-1 is a …
How to create FramePack videos on Google Colab

FramePack is a video generation method that allows you to create long AI videos with limited VRAM. If you don’t …
How to create videos with Google Veo 2

You can now use Veo 2, Google’s AI-powered video generation model, on Google AI Studio. It supports text-to-video and, more …
FramePack: long AI video with low VRAM

FramePack is a video generation method that uses little VRAM (6 GB) regardless of the video length. It supports image-to-video, …
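The excerpt cuts off before explaining why memory stays flat; the core FramePack idea is to pack older frames at progressively higher compression, so the context the model sees stays bounded however long the video gets. Here is a back-of-the-envelope sketch of that property (the per-frame token count and the halving schedule are made up for illustration, not FramePack's actual numbers):

```python
def context_tokens(num_past_frames: int) -> int:
    # Illustrative schedule: the frame k steps back keeps tokens_per_frame // 2**k
    # tokens, so far-back frames shrink to zero and the sum stays bounded
    # (here by about 2 * tokens_per_frame) no matter how many frames exist.
    tokens_per_frame = 1536  # made-up number for illustration
    return sum(tokens_per_frame >> k for k in range(num_past_frames))

for n in (4, 16, 256):
    print(n, context_tokens(n))  # 2880, 3070, 3070: growth flattens out quickly
```

A roughly constant context means roughly constant activation memory, which is why the VRAM requirement does not grow with the length of the video.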
Speeding up Hunyuan Video 3x with TeaCache

Hunyuan Video is one of the highest-quality video models that can run on a local PC …
How to speed up Wan 2.1 Video with TeaCache and Sage Attention

Wan 2.1 Video is a state-of-the-art AI model that you can use locally on your PC. However, it does take …
How to use Wan 2.1 LoRA to rotate and inflate characters

Wan 2.1 Video is a generative AI video model that produces high-quality video on consumer-grade computers. Remade AI, an AI …
How to use LTX Video 0.9.5 on ComfyUI

LTX Video 0.9.5 is an improved version of the LTX local video model. The model is very fast — it …
How to run Hunyuan Image-to-video on ComfyUI

The Hunyuan Video model has been a huge hit in the open-source AI community. It can generate high-quality videos from …
How to run Wan 2.1 Video on ComfyUI

Wan 2.1 Video is a series of open foundation video models that supports a wide range of video-generation tasks. It …
CodeFormer: Enhancing facial detail in ComfyUI

CodeFormer is a robust face restoration tool that enhances facial features, making them more realistic and detailed. Integrating CodeFormer into …
3 ways to fix Queue button missing in ComfyUI

Sometimes, the “Queue” button disappears in my ComfyUI for no reason. It may be due to glitches in the updated …
How to run Lumina Image 2.0 on ComfyUI

Lumina Image 2.0 is an open-source AI model that generates images from text descriptions. It excels in artistic styles and …
TeaCache: 2x speed up in ComfyUI

Do you wish AI ran faster on your PC? TeaCache can speed up diffusion models with negligible changes in …
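The excerpt is truncated before it explains the trick; broadly, TeaCache-style caching skips recomputing a diffusion block at timesteps where its input has barely changed and reuses the previous output instead. Below is a generic sketch of that thresholded-caching idea, not TeaCache's actual implementation; the threshold value, the distance metric, and `compute_block` are placeholders.

```python
import numpy as np

def cached_denoise(inputs_per_step, compute_block, rel_l1_threshold=0.05):
    """Generic thresholded caching: reuse the last block output while the
    accumulated relative change of the block input stays below a threshold.
    (Illustration of the idea only, not TeaCache's actual code.)"""
    cached_out, prev_inp, accum = None, None, 0.0
    outputs = []
    for x in inputs_per_step:
        if prev_inp is not None:
            accum += np.abs(x - prev_inp).sum() / (np.abs(prev_inp).sum() + 1e-8)
        if cached_out is None or accum >= rel_l1_threshold:
            cached_out = compute_block(x)  # expensive: run the real block
            accum = 0.0
        outputs.append(cached_out)         # cheap: reuse while change is small
        prev_inp = x
    return outputs
```

A threshold of zero recomputes every step; raising it trades a small amount of output drift for fewer block evaluations, which is where the speedup in the title comes from.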
How to use Hunyuan video LoRA to create consistent characters

Low-Rank Adaptation (LoRA) has emerged as a game-changing technique for finetuning image models like Flux and Stable Diffusion. By focusing …
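As a quick illustration of what "low-rank" buys you, here is a minimal sketch of the LoRA update (the dimensions are illustrative, not from any real model): instead of training the full weight matrix, you train two small matrices and add their product on top of the frozen weight.

```python
import numpy as np

# Minimal LoRA sketch with illustrative dimensions (not a real model layer).
d_out, d_in, rank = 3072, 3072, 16        # rank << d_out, d_in

W = np.random.randn(d_out, d_in)          # frozen pretrained weight
B = np.zeros((d_out, rank))               # trainable, zero-initialized so the
A = np.random.randn(rank, d_in) * 0.01    # adapter starts as a no-op

W_adapted = W + B @ A                     # effective weight at inference

full = W.size                             # parameters a full fine-tune would update
lora = B.size + A.size                    # parameters LoRA actually trains
print(f"trainable params: {lora:,} vs {full:,} ({lora / full:.1%})")
```

With these made-up dimensions the adapter trains roughly 1% of the parameters a full fine-tune would touch, which is why LoRA files are small and relatively quick to train.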
How to remove background in ComfyUI

Background removal is an essential tool for digital artists and graphic designers. It cuts clutter and enhances focus. You can …
How to direct Hunyuan video with an image

Hunyuan Video is a local video model which turns a text description into a video. But what if you want …