ComfyUI Desktop makes it easier than ever to run ComfyUI locally without messing with the command line. It’s the most beginner-friendly way to start creating AI images and videos locally. No Python setup, Git installation, or configuration headaches. If you just want to get started quickly, ComfyUI Desktop is the clear winner. The installation takes…
Wan 2.2 AIO Upscale workflow
Long-time member Heinz Zysset kindly shares this high-resolution text-to-video workflow. Step 1: Download models. Download the Wan 2.2 AIO model…
Qwen Image User Guide
Qwen Image is a new open-source text-to-image model developed by Alibaba’s Qwen team. It’s quickly gaining praise from AI creators. Unlike many closed models, Qwen Image is both flexible and accessible, making it a strong alternative to Stable Diffusion and Flux models. You can run it locally in ComfyUI and customize it. In this article, I…
ControlNet ComfyUI workflows
You can use a reference image to direct AI image generation using ControlNet. Below is an example of copying the pose of the girl on the left to generate a picture of a warrior on the right. In this post, I will go through:…
Wan 2.2 First Last Frame Video
This Wan 2.2 image-to-video workflow lets you fix the first and last frames and generate a video connecting the two (FLF2V). See the example below. Input images: Output video: I will provide instructions for the following two workflows:…
Wan 2.2 text-to-image (workflow included)
Wan 2.2 is one of the best local video models, known for generating high-quality videos. But if you set the video frame count to 1, you get an image! It is a competent image model, thanks to training on diverse sets of videos, but only if you set the parameters correctly.…
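The frame-count trick can be sketched as a small patch to an exported ComfyUI API-format workflow. This is a minimal sketch: the node class name (`EmptyHunyuanLatentVideo`) and its `length` input are assumptions based on common Wan 2.2 workflows, so check the names in your own exported JSON.

```python
import json

def make_single_frame(workflow: dict) -> dict:
    """Set the latent video length to 1 frame so the model renders a still image.

    Node and field names are assumptions; adjust them to match your
    exported API-format workflow JSON.
    """
    patched = json.loads(json.dumps(workflow))  # cheap deep copy
    for node in patched.values():
        # Wan 2.2 text-to-video graphs typically create their latent with an
        # empty-latent-video node that exposes a "length" (frame count) input.
        if "Latent" in node.get("class_type", "") and "length" in node.get("inputs", {}):
            node["inputs"]["length"] = 1
    return patched

# Hypothetical minimal fragment of an API-format workflow:
wf = {"3": {"class_type": "EmptyHunyuanLatentVideo",
            "inputs": {"width": 1280, "height": 720, "length": 81, "batch_size": 1}}}
single = make_single_frame(wf)
print(single["3"]["inputs"]["length"])  # 1
```

You would then submit the patched graph to your local ComfyUI server as usual; the rest of the workflow (sampler, prompt, save node) stays unchanged.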
Video from text with Wan 2.2 local model
Wan 2.2 is a high-quality video AI model you can run locally on your computer. In this tutorial, I will cover:…
Turn an image into a video with Wan 2.2 local model
Wan 2.2 is a local video model that can turn text or images into videos. In this article, I will focus on the popular image-to-video function. The new Wan 2.2 models are surprisingly capable in motion and camera movements (though not necessarily in controlling them). This article will cover how to install and run the following…
Turn any image into Arcane style
Do you have a photo you want to turn into a unique animation style? Long-time member Heinz Zysset kindly shares his stylization workflow with our site readers. It turns any image into the style of the Arcane animation series.…
How to use ComfyUI API nodes
ComfyUI is known for running local image and video AI models. Recently, it added support for running proprietary closed models through API. As of writing, you can use popular models from Kling, Google Veo, OpenAI, RunwayML, and Pika, among others. In this article, I will show you how to set up and use ComfyUI API…