How to create a consistent character from different viewing angles


Do you ever need to create consistent AI characters from different viewing angles? The method in this article makes a grid of the same character like the one below. You can use them for downstream artwork that requires the same character in multiple images.

consistent character from different viewing angles

Here’s the video version of this tutorial for AUTOMATIC1111.

Video tutorial for ComfyUI.

Software

I will provide instructions for creating this in both AUTOMATIC1111 and ComfyUI.

AUTOMATIC1111

We will use AUTOMATIC1111, a popular and free Stable Diffusion software. Check out the installation guides for Windows, Mac, or Google Colab.

If you are new to Stable Diffusion, check out the Quick Start Guide.

Take the Stable Diffusion course if you want to build solid skills and understanding.

Check out the AUTOMATIC1111 Guide if you are new to AUTOMATIC1111.

ComfyUI

We will use ComfyUI, an alternative to AUTOMATIC1111.

Read the ComfyUI installation guide and ComfyUI beginner’s guide if you are new to ComfyUI.

Take the ComfyUI course to learn ComfyUI step-by-step.

How this workflow works

Checkpoint model

This workflow only works with some SDXL models. It is confirmed to work with the model suggested below; switching to other checkpoint models requires experimentation.

The reason appears to be the training data: It only works well with models that respond well to the keyword “character sheet” in the prompt.

Controlling the grid of viewing angles

With the right model, this technique uses the Canny SDXL ControlNet to copy the outline of a character sheet, like the one below.

The control image is what ControlNet actually uses. When using a new reference image, always inspect the preprocessed control image to ensure the details you want are there.

Copying the face

We will use IP-adapter Face ID Plus v2 to copy the face from another reference image. This IP-adapter model copies only the face. Because it uses InsightFace to extract facial features from the reference image, it can accurately transfer the face to different viewing angles.

Face correction

We must use an image size compatible with SDXL, for example 1024×1024 pixels, to ensure global consistency. The challenge is that the faces are then too small to be rendered correctly by the model.

We will use automatic inpainting at higher resolution to fix them.
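
To see the scale problem concretely, here is a rough back-of-the-envelope calculation. The 3×3 grid layout and the face-to-cell proportion are assumptions for illustration, as is the 512-pixel inpaint crop size:

```python
# Rough estimate of face size in a 3x3 character-sheet grid at SDXL resolution.
# Assumption: the sheet is a 3x3 grid and a face spans about 1/3 of each cell.
image_size = 1024                 # SDXL-native square resolution
grid = 3                          # 3x3 grid of poses (assumed layout)
cell = image_size // grid         # ~341 px per pose
face = cell // 3                  # ~113 px per face -- too small for clean detail

# Inpainting at higher resolution crops the face region, re-renders it at a
# size the model handles well, then pastes it back.
inpaint_res = 512                 # assumed inpaint crop resolution
upscale_factor = inpaint_res / face
print(cell, face, round(upscale_factor, 1))   # prints: 341 113 4.5
```

In other words, each face gets roughly 4–5× more pixels during the inpainting pass than it had in the full sheet, which is why the automatic face fix works so well here.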

AUTOMATIC1111

This section covers using this workflow in AUTOMATIC1111. A video version is also available.

Software setup

Checkpoint model

We will use the ProtoVision XL model. Download it and put it in the folder stable-diffusion-webui > models > Stable-diffusion.

Extensions

You will need the ControlNet and ADetailer extensions.

The installation URLs are:

https://github.com/Mikubill/sd-webui-controlnet
https://github.com/Bing-su/adetailer

In AUTOMATIC1111, go to Extensions > Install from URL. Enter a URL from above in “URL for extension’s git repository” and click the Install button. Repeat for the other extension.

Restart AUTOMATIC1111 completely.
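
If you prefer the command line, the same two extensions can also be installed by cloning their repositories into the webui’s extensions folder. This is a sketch that only builds and prints the commands; it assumes git is on your PATH and that "stable-diffusion-webui" matches your install location:

```python
# Sketch: install the ControlNet and ADetailer extensions from the command
# line by cloning them into stable-diffusion-webui/extensions.
from pathlib import Path

EXTENSIONS = [
    "https://github.com/Mikubill/sd-webui-controlnet",
    "https://github.com/Bing-su/adetailer",
]

def clone_commands(webui_dir: str) -> list[list[str]]:
    """Build the git clone commands without running them."""
    ext_dir = Path(webui_dir) / "extensions"
    return [["git", "clone", url, str(ext_dir / url.rsplit("/", 1)[-1])]
            for url in EXTENSIONS]

# Print the commands; run them in a shell (or via subprocess) to install.
for cmd in clone_commands("stable-diffusion-webui"):
    print(" ".join(cmd))
```

Either way, restart the webui afterward so the extensions are loaded.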

Scroll down to the ControlNet section on the txt2img page.

You should see 3 ControlNet Units available (Unit 0, 1, and 2). If not, go to Settings > ControlNet. Set Multi-ControlNet: ControlNet unit number to 3. Restart.

IP-adapter and controlnet models

You will need the following two models.

Download them and put them in the folder stable-diffusion-webui > models > ControlNet.
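
As a sanity check, here is a small sketch that verifies the expected A1111 file layout. The model filenames are the ones referenced later in this tutorial; the checkpoint filename is an assumption, so match it to your actual download:

```python
# Sketch: check that the models this tutorial uses are in the folders
# AUTOMATIC1111 expects. Filenames below are assumptions based on this guide.
from pathlib import Path

EXPECTED = {
    "models/Stable-diffusion": ["protovisionXL.safetensors"],  # checkpoint (name may differ)
    "models/ControlNet": [
        "diffusers_xl_canny_mid.safetensors",       # Canny SDXL ControlNet
        "ip-adapter-faceid-plusv2_sdxl.bin",        # IP-adapter Face ID Plus v2
    ],
}

def missing_files(webui_dir: str) -> list[str]:
    """Return the expected model files that are not present yet."""
    root = Path(webui_dir)
    return [f"{folder}/{name}"
            for folder, names in EXPECTED.items()
            for name in names
            if not (root / folder / name).exists()]

print(missing_files("stable-diffusion-webui"))
```

An empty list means everything is in place; otherwise it tells you exactly which file to download next.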

Step 1: Enter txt2img setting

Go to the txt2img page and enter the following settings.

  • Prompt:

character sheet, color photo of woman, white background, blonde long hair, beautiful eyes, black shirt

  • Negative prompt:

disfigured, deformed, ugly, text, logo

  • Sampling method: DPM++ 2M Karras
  • Sampling Steps: 20
  • CFG scale: 7
  • Seed: -1
  • Size: 1024×1024

Step 2: Enter ControlNet setting

Scroll down to the ControlNet section on the txt2img page.

ControlNet Unit 0

We will use Canny in ControlNet Unit 0.

Save the following image to your local storage. Upload it to the image canvas under Single Image.

Here are the rest of the settings.

  • Enable: Yes
  • Pixel Perfect: Yes
  • Control Type: Canny
  • Preprocessor: canny
  • Model: diffusers_xl_canny_mid
  • Control Weight: 0.4
  • Starting Control Step: 0
  • Ending Control Step: 0.5

ControlNet Unit 1

We will use ControlNet Unit 1 for copying a face using IP-adapter.

Save the following image to your local storage and upload it to the image canvas of ControlNet Unit 1. You can use any image with a face you want to copy.

Below are the rest of the settings.

  • Enable: Yes
  • Pixel Perfect: No
  • Control Type: IP-Adapter
  • Preprocessor: ip-adapter_face_id_plus (or ip-adapter-auto)
  • Model: ip-adapter-faceid-plusv2_sdxl
  • Control Weight: 0.7
  • Starting Control Step: 0
  • Ending Control Step: 1

It should look like this:
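
For reference, the same settings can be submitted through the webui’s API. This is a hedged sketch: it assumes A1111 was started with the --api flag, the ControlNet keys follow the sd-webui-controlnet API conventions (worth verifying against your installed version), and the "<base64 image>" strings are placeholders for your base64-encoded reference images:

```python
# Sketch: the txt2img and ControlNet settings above as an API payload for
# AUTOMATIC1111 started with --api. The two ControlNet units go under
# alwayson_scripts. Replace "<base64 image>" with real base64 image data.
import json

payload = {
    "prompt": ("character sheet, color photo of woman, white background, "
               "blonde long hair, beautiful eyes, black shirt"),
    "negative_prompt": "disfigured, deformed, ugly, text, logo",
    "sampler_name": "DPM++ 2M Karras",
    "steps": 20,
    "cfg_scale": 7,
    "seed": -1,
    "width": 1024,
    "height": 1024,
    "alwayson_scripts": {
        "ControlNet": {
            "args": [
                {   # Unit 0: Canny, copies the character-sheet outline
                    "enabled": True, "pixel_perfect": True,
                    "module": "canny", "model": "diffusers_xl_canny_mid",
                    "weight": 0.4, "guidance_start": 0.0, "guidance_end": 0.5,
                    "image": "<base64 image>",
                },
                {   # Unit 1: IP-adapter, copies the face
                    "enabled": True, "pixel_perfect": False,
                    "module": "ip-adapter_face_id_plus",
                    "model": "ip-adapter-faceid-plusv2_sdxl",
                    "weight": 0.7, "guidance_start": 0.0, "guidance_end": 1.0,
                    "image": "<base64 image>",
                },
            ]
        }
    },
}

# To submit (requires a running webui with --api):
#   import requests
#   r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
print(json.dumps(payload)[:40])
```

This mirrors the UI one-to-one: Control Weight maps to weight, and Starting/Ending Control Step map to guidance_start/guidance_end.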

Step 3: Enable ADetailer

We will use ADetailer to fix the face automatically.

Go to the ADetailer section.

Enable ADetailer: Yes.

We will use the default settings.

Step 4: Generate image

Press Generate.

You should get an image like this.

ComfyUI

Software setup

Workflow

Load the following workflow in ComfyUI.

Every time you try to run a new workflow, you may need to do some or all of the following steps.

  1. Install ComfyUI Manager
  2. Install missing nodes
  3. Update everything

Install ComfyUI Manager

Install ComfyUI manager if you haven’t done so already. It provides an easy way to update ComfyUI and install missing nodes.

To install this custom node, open PowerShell (Windows) or the Terminal app (Mac) and go to the custom nodes folder:

cd ComfyUI/custom_nodes

Install ComfyUI Manager by cloning the repository into the custom_nodes folder.

git clone https://github.com/ltdrdata/ComfyUI-Manager

Restart ComfyUI completely. You should see a new Manager button appearing on the menu.

If you don’t see the Manager button, check the terminal for error messages. One common issue is Git not being installed. Installing it and repeating the steps should resolve the issue.

Install missing custom nodes

To install the custom nodes used by the workflow that you don’t have yet:

  1. Click Manager in the Menu.
  2. Click Install Missing Custom Nodes.
  3. Restart ComfyUI completely.

Update everything

You can use ComfyUI Manager to update custom nodes and ComfyUI itself.

  1. Click Manager in the Menu.
  2. Click Update All. It may take a while to complete.
  3. Restart ComfyUI and refresh the ComfyUI page.

Checkpoint Model

We will use the ProtoVision XL model. Download it and put it in the folder comfyui > models > checkpoints.

ControlNet model

Download this ControlNet model: diffusers_xl_canny_mid.safetensors

Put it in the folder comfyui > models > controlnet.

IP-adapter models

Download the Face ID Plus v2 model: ip-adapter-faceid-plusv2_sdxl.bin. Put it in the folder comfyui > models > ipadapter. (Create the folder if you don’t see it)

Download the Face ID Plus v2 LoRA model: ip-adapter-faceid-plusv2_sdxl_lora.safetensors. Put it in the folder comfyui > models > loras.
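
The ComfyUI model layout described above can be sketched in a small helper that creates any missing folders (such as ipadapter) and reports which files still need downloading. The checkpoint filename is an assumption; match it to your download:

```python
# Sketch: the ComfyUI model layout this workflow expects, per the steps above.
# Creates missing model folders and lists expected files not yet present.
from pathlib import Path

LAYOUT = {
    "models/checkpoints": ["protovisionXL.safetensors"],              # checkpoint (name may differ)
    "models/controlnet": ["diffusers_xl_canny_mid.safetensors"],      # Canny ControlNet
    "models/ipadapter": ["ip-adapter-faceid-plusv2_sdxl.bin"],        # IP-adapter Face ID
    "models/loras": ["ip-adapter-faceid-plusv2_sdxl_lora.safetensors"],  # Face ID LoRA
}

def prepare_folders(comfyui_dir: str) -> list[str]:
    """Create any missing model folders and return expected files not found."""
    root = Path(comfyui_dir)
    missing = []
    for folder, names in LAYOUT.items():
        (root / folder).mkdir(parents=True, exist_ok=True)
        missing += [f"{folder}/{name}" for name in names
                    if not (root / folder / name).exists()]
    return missing

# Example: prepare_folders("ComfyUI") -> [] once all four files are in place.
```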

Step 1: Select checkpoint model

In the Load Checkpoint node, select the ProtoVision XL model.

Step 2: Upload reference image for Controlnet

Download the following image.

Upload it to the ControlNet Canny preprocessor.

Step 3: Upload the IP-adapter image

Download the following image.

Upload it to the IP-adapter’s Load Image node.

Step 4: Generate image

Press Queue Prompt.

You should get two output images with consistent faces. The face-fixed image is on the right.
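
If you ever want to queue this workflow without the browser, ComfyUI exposes a small HTTP API: export the workflow with “Save (API Format)” (enable dev mode options in the settings first) and POST it to the /prompt endpoint. A sketch, assuming ComfyUI is listening on the default 127.0.0.1:8188 and the export is saved as workflow_api.json:

```python
# Sketch: queue a workflow through ComfyUI's HTTP API. Assumes the workflow
# was exported via "Save (API Format)" and ComfyUI runs on the default port.
import json
import urllib.request

def build_request(workflow_path: str, host: str = "127.0.0.1:8188"):
    """Wrap an API-format workflow in the JSON body that /prompt expects."""
    with open(workflow_path) as f:
        workflow = json.load(f)
    body = json.dumps({"prompt": workflow}).encode()
    return urllib.request.Request(
        f"http://{host}/prompt", data=body,
        headers={"Content-Type": "application/json"},
    )

# To submit (requires a running ComfyUI instance):
#   urllib.request.urlopen(build_request("workflow_api.json"))
```

This is equivalent to pressing Queue Prompt; the outputs land in ComfyUI’s usual output folder.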

Tips

When you work on the prompt, mute (Ctrl-M) the FaceDetailer node to speed up the process. Once you are happy with the prompt, unmute it with Ctrl-M.

Customization

The image can be customized through the prompt, for example:

character sheet, color photo of woman, white background, long hair, beautiful eyes, black blouse

Troubleshooting

If the face doesn’t look like the image:

  • Increase the control weight of the IP adapter.
  • Lower the control weight and ending control step of the Canny ControlNet.

Make sure the sum of the control weights of the two ControlNets is not too much higher than 1. Otherwise, you may see artifacts.
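
The rule of thumb above can be written as a quick check. The 1.2 cutoff is an illustrative margin, not a value from this article; tune it to taste:

```python
# Quick sanity check for the combined ControlNet weights, per the rule of
# thumb above: keep the sum of the two control weights close to 1.
def weights_ok(canny_weight: float, ip_adapter_weight: float,
               limit: float = 1.2) -> bool:
    """True if the combined control weight is unlikely to cause artifacts."""
    return canny_weight + ip_adapter_weight <= limit

print(weights_ok(0.4, 0.7))   # tutorial defaults, sum ~1.1 -> True
print(weights_ok(0.8, 0.9))   # sum 1.7 -> likely artifacts -> False
```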


By Andrew

Andrew is an experienced engineer with a specialization in Machine Learning and Artificial Intelligence. He is passionate about programming, art, photography, and education. He has a Ph.D. in engineering.

15 comments

  1. Hello Andrew, great work! I tried to put it work but getting the following errors, do you have any insight on this? Thanks.

    Error occurred when executing IPAdapterUnifiedLoaderFaceID:

    Unable to import dependency onnxruntime.

    File "C:\Users\user\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\user\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\user\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\user\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 562, in load_models
    self.insightface['model'] = insightface_loader(provider)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\user\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\utils.py", line 150, in insightface_loader
    raise Exception(e)

  2. hi Andrew,
    great post, but I was wondering if I could automate the work of generating illustrations and creating turntables and walk-cycles for animation through such techniques? Are there any Comfy UI workflows to help with this?

  3. Maybe something to mention (and as soon as I did, I stopped having errors) is that you need to install “insightface” for all this to work, as it is what the IPAdapter uses in the background.
    Maybe basic, but for newbies (like me!) nice to know before getting into this.
Excellent guide as always!

  4. Hi – Never mind my previous request for help – turned out looking at the logs there was an error occurring from a missing library.

  5. Hey Andrew, thanks for this writeup.
    I’m following the directions, but I’ve run into an issue.
    I’m Using A1111… i’ve looked over every setting….
    Everything works pretty much as shown, BUT, if I enable the IP-Adapter to influence the face, A1111 won’t generate a character sheet – just a single face. If I disable IP-Adapter, it goes back to generating the 9 faces, but they don’t look like the uploaded face, obviously… any thing I can try to troubleshoot?

    Also – one other question – in the ADetailer section you say to just accept the defaults…. which model are you using to fix the faces? I don’t see that mentioned. Are you using “None”?

  6. Hello. Thank you for your great content.
    How can I upscale an image that I generated with the SDXL model? Which ControlNet model should I use? I have also purchased your courses, but there was no mention of upscaling with ControlNet for SDXL in the course.

    1. Hi! You can use upscaling techniques like AI upscaler, img2img or hi res fix for SDXL. I don’t believe a tile controlnet model is available for SDXL. But you can still use tile upscaling (ultimate sd upscale) with an AI upscaler. Just keep the denoising strength low.

      I will add a section for upscaling later.
