ControlNet ComfyUI workflows

You can use a reference image to direct AI image generation using ControlNet. Below is an example of copying the pose of the girl on the left to generate a picture of a warrior on the right.

In this post, I will go through:

  1. A brief overview of ControlNet
  2. ControlNet workflows for SD 1.5, SDXL, and Flux
  3. Settings for advanced controls

Software

We will use ComfyUI, a free AI image and video generator. You can use it on Windows, Mac, or Google Colab.

Think Diffusion provides an online ComfyUI service. They offer an extra 20% credit to our readers.

Read the ComfyUI beginner’s guide if you are new to ComfyUI. See the Quick Start Guide if you are new to AI images and videos.

Take the ComfyUI course to learn how to use ComfyUI step by step.

What is ControlNet?

Using ControlNet in ComfyUI requires a solid understanding of how the AI models work. So I will give a brief overview here. Feel free to skip to the next section if you already know how ControlNet works.

Conditioning

In text-to-image, the prompt conditions the checkpoint model to create images that match the prompt. In the example below, the poses vary from image to image because there are many ways to match the prompt.

Prompt: underwater photo of a beautiful retro 25 yo housewife in yellow tight leather dress, small fish

However, you may only want a specific pose. When using ControlNet, both the prompt and the control image condition the checkpoint model. All images follow the pose in the reference image.

Adding a depth ControlNet to fix the pose.
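
For readers who like to see the mechanics in code, here is the same conditioning idea outside ComfyUI, as a minimal sketch using the diffusers library. The checkpoint and ControlNet repo IDs are real, but the prompt and file names are illustrative.

```python
# Minimal sketch: the prompt AND the control image both condition the model.
# Assumes the diffusers library; file names are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

control_image = load_image("depth_control_image.png")  # preprocessed depth map

# Without ControlNet, only the prompt conditions the model.
# Here, the control image constrains the pose as well:
image = pipe(
    "photo of a warrior standing on a cliff",
    image=control_image,
    num_inference_steps=25,
).images[0]
image.save("warrior.png")
```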

Preprocessor

Notice the ghost image between the reference image and ControlNet?

ControlNets cannot use the reference image directly. A preprocessor converts the reference image into a control image that a ControlNet can use.

Different ControlNets require different preprocessors.

  • The Openpose ControlNet requires the Openpose preprocessor.
  • The Canny ControlNet requires the Canny preprocessor.
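
To make this concrete, the Canny preprocessor is essentially classic edge detection. Below is a minimal sketch of the same step using OpenCV; the file names are placeholders, and ComfyUI's preprocessor nodes do the equivalent internally.

```python
# Sketch of a Canny preprocessor: reference image in, control image out.
import cv2
import numpy as np
from PIL import Image

ref = np.array(Image.open("reference.png").convert("RGB"))
gray = cv2.cvtColor(ref, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)        # low/high hysteresis thresholds
edges = np.stack([edges] * 3, axis=-1)   # ControlNets expect a 3-channel image
Image.fromarray(edges).save("canny_control.png")
```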

Let’s compare the depth (left) and OpenPose (right) control images below. The ControlNet models use them to steer the image generation.

What’s the difference? The Depth ControlNet imposes constraints using the depth control image, so the general outline of the girl is followed. The Openpose ControlNet only directs the human pose: how the character stands, how her limbs are placed, and so on.
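
If you want to produce these two control images yourself in code, here is a hedged sketch using the controlnet_aux package, which wraps the same detectors that power ComfyUI's preprocessor nodes. The file names are placeholders.

```python
# Depth vs. Openpose control images from one reference picture.
# Assumes the controlnet_aux package is installed.
from controlnet_aux import MidasDetector, OpenposeDetector
from PIL import Image

ref = Image.open("reference.png")

depth = MidasDetector.from_pretrained("lllyasviel/Annotators")(ref)
depth.save("depth_control.png")       # keeps the whole outline and scene layout

pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")(ref)
pose.save("openpose_control.png")     # keeps only the skeleton and limb placement
```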

ComfyUI ControlNet workflows

In this section, I will share the versatile ControlNet Depth workflows for copying characters and scenes. You will need to use different workflows for SD 1.5, SDXL, and the Flux image models.

ControlNet SD 1.5

The workflow uses a Stable Diffusion 1.5 model with ControlNet.

Step 1: Load the workflow

Download the workflow below.

Drag and drop it to ComfyUI.

Step 2: Install models

Step 3: Install missing nodes

If you see nodes with red borders, you don’t have the custom nodes required for this workflow. You should have ComfyUI Manager installed before performing this step.

Click Manager > Install Missing Custom Nodes.

Install the nodes that are missing.

Restart ComfyUI.

Refresh the ComfyUI page.

Step 4: Upload the reference image

Upload the reference image to the Load Image node.

Feel free to use the test image below.

Step 5: Revise the prompt

Revise the prompt to describe what you want to generate.

Step 6: Generate an image

Click the Run button to run the workflow.

Change the seed value in the KSampler node to generate a new image.
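
If you prefer to queue runs from a script instead of clicking Run, ComfyUI exposes a small HTTP API. A sketch, assuming ComfyUI is running locally on the default port and the workflow was exported with the Save (API Format) option:

```python
# Queue a ComfyUI workflow over the local HTTP API.
# Assumes ComfyUI at 127.0.0.1:8188 and an API-format workflow export.
import json
import requests

with open("workflow_api.json") as f:
    workflow = json.load(f)

resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
resp.raise_for_status()
print("Queued job:", resp.json()["prompt_id"])
```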

ControlNet SDXL

The workflow uses a Stable Diffusion XL model with ControlNet. The SDXL model offers a larger image size (1024×1024 native) and higher image quality.

Step 1: Load the workflow

Download the workflow below.

Drag and drop it to ComfyUI.

Step 2: Install models

Step 3: Install missing nodes

If you see nodes with red borders, you don’t have the custom nodes required for this workflow. You should have ComfyUI Manager installed before performing this step.

Click Manager > Install Missing Custom Nodes.

Install the nodes that are missing.

Restart ComfyUI.

Refresh the ComfyUI page.

Step 4: Upload the reference image

Upload the reference image to the Load Image node.

Feel free to use the test image below.

Step 5: Revise the prompt

Revise the prompt to describe what you want to generate.

Step 6: Generate an image

Click the Run button to run the workflow.

Change the seed value in the KSampler node to generate a new image.
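
Building on the API sketch from the SD 1.5 section, you can also randomize the seed programmatically instead of editing the KSampler node by hand. The node id below is hypothetical; look up the KSampler node's id in your own exported workflow.

```python
# Randomize the KSampler seed before queueing a new run.
import json
import random
import requests

with open("workflow_api.json") as f:
    workflow = json.load(f)

ksampler_id = "3"  # hypothetical id; find the node whose class_type is "KSampler"
workflow[ksampler_id]["inputs"]["seed"] = random.randint(0, 2**32 - 1)

requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
```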

ControlNet Flux

The workflow uses the Flux model, which has higher image quality than the SD 1.5 and SDXL models.

We will use the Flux ControlNet Union Pro Model for depth conditioning. However, the model can be used for other types of conditioning, such as Openpose and line art.
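
For reference, the same union model can also be driven from the diffusers library, where the conditioning type is picked with a control_mode index. Treat this as a sketch under assumptions: the repo id and the depth mode index (2) should be verified against the model card.

```python
# Hedged sketch: Flux ControlNet Union Pro via diffusers.
# The repo id and the control_mode index for depth are assumptions.
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    "a warrior in ornate armor",
    control_image=load_image("depth_control.png"),
    control_mode=2,  # assumed index for depth on this union model
    controlnet_conditioning_scale=0.7,
).images[0]
image.save("flux_warrior.png")
```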

Step 1: Load the workflow

Download the workflow below.

Drag and drop it to ComfyUI.

Step 2: Install models

  • Download the Flux.1 Dev model flux1-dev-fp8.safetensors. Put the model file in the folder ComfyUI > models > checkpoints.

Step 3: Install missing nodes

If you see nodes with red borders, you don’t have the custom nodes required for this workflow. You should have ComfyUI Manager installed before performing this step.

Click Manager > Install Missing Custom Nodes.

Install the nodes that are missing.

Restart ComfyUI.

Refresh the ComfyUI page.

Step 4: Upload the reference image

Upload the reference image to the Load Image node.

Feel free to use the test image below.

Step 5: Revise the prompt

Revise the prompt to describe what you want to generate.

Step 6: Generate an image

Click the Run button to run the workflow.

Change the seed value in the KSampler node to generate a new image.

Advanced settings

Adjust the strength parameter in the ControlNet node to control the influence of the depth ControlNet.

Flux ControlNet.
SD 1.5 or SDXL ControlNet.

See the effect of changing the strength value below. Increasing the value (e.g., 0.5 to 0.7) makes the image follow the reference image more closely. But you don’t want to set it too high (e.g., 0.9): you could overcook the image and make it look unnatural.
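
In diffusers terms, the equivalent knob is controlnet_conditioning_scale. A minimal sketch sweeping the strength values discussed above, with the same illustrative SD 1.5 setup as earlier:

```python
# Sweep ControlNet strength to compare its effect on the output.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
control_image = load_image("depth_control_image.png")

for strength in (0.5, 0.7, 0.9):  # 0.9 tends to overcook the image
    image = pipe(
        "photo of a warrior standing on a cliff",
        image=control_image,
        controlnet_conditioning_scale=strength,
    ).images[0]
    image.save(f"warrior_strength_{strength}.png")
```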

Andrew

By Andrew

Andrew is an experienced software engineer with a specialization in Machine Learning and Artificial Intelligence. He is passionate about programming, art, and education. He has a doctorate degree in engineering.

1 comment

  1. Thanks, Andrew.
    In the flux workflow, what’s the difference between the nodes Set Shakker Labs Union ControlNet Type and SetUnionControlNetType? I see you have the first of these connected and it works with the ComfyUI_ControlNet_Aux OpenPose node whereas the second one doesn’t seem to work with that node.
