3 methods to generate consistent face with Stable Diffusion

Generate consistent face in Stable Diffusion

Are you looking for ways to generate consistent faces across multiple images with Stable Diffusion? You may be working on illustrations for a storybook or a comic strip. In this post, you will find 3 methods to generate consistent faces.

  • Multiple celebrity names
  • The Roop extension
  • Dreambooth

Software

We will use AUTOMATIC1111 Stable Diffusion GUI. You can use this GUI on Windows, Mac, or Google Colab.

Check out the Quick Start Guide if you are new to Stable Diffusion.

Multiple celebrity names

Using celebrity names is a sure way to generate consistent faces. Let’s study the following base prompt, which generates a generic face.

Base prompt:

photo of young woman, highlight hair, sitting outside restaurant, wearing dress, rim lighting, studio lighting, looking at the camera, dslr, ultra quality, sharp focus, tack sharp, dof, film grain, Fujifilm XT3, crystal clear, 8K UHD, highly detailed glossy eyes, high detailed skin, skin pores

We will use the same negative prompt for the rest of this article.

disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w
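
If you prefer to script the generation outside the Web-UI, the same prompt and negative prompt can be run with the diffusers library. This is only a minimal sketch, not the AUTOMATIC1111 workflow itself; the checkpoint name is an assumption, and any Stable Diffusion v1.5 model will do.

import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint; any SD v1.5-based model works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = (
    "photo of young woman, highlight hair, sitting outside restaurant, wearing dress, "
    "rim lighting, studio lighting, looking at the camera, dslr, ultra quality, sharp focus, "
    "tack sharp, dof, film grain, Fujifilm XT3, crystal clear, 8K UHD, "
    "highly detailed glossy eyes, high detailed skin, skin pores"
)
negative_prompt = "disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w"

# Generate a few candidates with different seeds.
for seed in range(4):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, negative_prompt=negative_prompt, generator=generator).images[0]
    image.save(f"base_{seed}.png")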

They are nice faces, but they are different. There are occasions when you want to generate the same face across multiple images.

As we have studied in the prompt guide, celebrity names have a powerful effect. Using them is a proven way to generate consistent faces.

Let’s add a strong name in Stable Diffusion, Emma Watson, to the prompt.

Emma Watson, photo of young woman, highlight hair, sitting outside restaurant, wearing dress, rim lighting, studio lighting, looking at the camera, dslr, ultra quality, sharp focus, tack sharp, dof, film grain, Fujifilm XT3, crystal clear, 8K UHD, highly detailed glossy eyes, high detailed skin, skin pores

We get Emma in all images.

But what if you don’t want images of any recognizable face? You just want a generic face in multiple images. There’s a technique for that. You can use multiple celebrity names to blend their faces into a single, consistent face.

Let’s use these three names: Emma Watson, Tara Reid, and Ana de Armas. Stable Diffusion will take all 3 faces and blend them together to form a new face.

Emma Watson, Tara Reid, Ana de Armas, photo of young woman, highlight hair, sitting outside restaurant, wearing dress, rim lighting, studio lighting, looking at the camera, dslr, ultra quality, sharp focus, tack sharp, dof, film grain, Fujifilm XT3, crystal clear, 8K UHD, highly detailed glossy eyes, high detailed skin, skin pores

That’s good. The face is consistent across these images. But why do they look so… Emma? The reason is that Emma Watson is a very strong keyword in Stable Diffusion. You have to dial her down using a keyword weight. In AUTOMATIC1111, you use the syntax (keyword: weight) to apply a weight to a keyword.

Adjusting the weight of each name allows you to dial in the facial features. We arrive at the prompt:

(Emma Watson:0.5), (Tara Reid:0.9), (Ana de Armas:1.2), photo of young woman, highlight hair, sitting outside restaurant, wearing dress, rim lighting, studio lighting, looking at the camera, dslr, ultra quality, sharp focus, tack sharp, dof, film grain, Fujifilm XT3, crystal clear, 8K UHD, highly detailed glossy eyes, high detailed skin, skin pores

See this face repeating across the images!

Use multiple celebrity names and keyword weights to carefully tune the facial features you want. You can also use celebrity names in the negative prompt to avoid facial features you DON’T want.
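
Outside of AUTOMATIC1111, prompt weighting is not part of a plain diffusers call, but the compel library offers a similar mechanism. The sketch below is an assumption about that workflow; note that compel writes weights as (name)weight rather than (name:weight), and pipe is the pipeline from the earlier sketch.

from compel import Compel

# Build weighted text embeddings with compel (syntax: (term)weight).
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

weighted = (
    "(Emma Watson)0.5, (Tara Reid)0.9, (Ana de Armas)1.2, "
    "photo of young woman, highlight hair, sitting outside restaurant, wearing dress"
)
prompt_embeds = compel(weighted)
negative_embeds = compel("disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w")

image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds).images[0]
image.save("blended_face.png")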

Experiment with multiple celebrity LoRAs to achieve the same effect.

Roop

AUTOMATIC1111’s Roop extension lets you copy a face from a reference photo to images generated with Stable Diffusion. The standalone roop program can also do this for videos; the AUTOMATIC1111 extension only supports images.

Installing the Roop extension

Google Colab

Installing the Roop extension on our Stable Diffusion Colab notebook is easy. All you need to do is select the Roop extension.

Windows or Mac

Follow these steps to install the Roop extension in AUTOMATIC1111.

  1. Start AUTOMATIC1111 Web-UI normally.
  2. Navigate to the Extensions page.
  3. Click the Install from URL tab.
  4. Enter the following URL in the URL for extension’s git repository field.

https://github.com/s0md3v/sd-webui-roop

  5. Wait for the confirmation message that the installation is complete.
  6. Restart AUTOMATIC1111.

Generating new images with Roop

We will use text-to-image to generate new images.

photo of young woman, highlight hair, sitting outside restaurant, wearing dress, rim lighting, studio lighting, looking at the camera, dslr, ultra quality, sharp focus, tack sharp, dof, film grain, Fujifilm XT3, crystal clear, 8K UHD, highly detailed glossy eyes, high detailed skin, skin pores

disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w

Model: Realistic Vision 2.0

Reference image:

Restore face: None

Upscaler: None

Here are the results. Now you get the same face from all images!

Sometimes using a real photo as the face reference is not desirable. You can use an AI image as the reference instead.

Reference face (AI image)

Generated with Roop:
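
The extension is built on insightface’s inswapper model. If you want to reproduce the face swap in a standalone script, a rough sketch looks like this; the file names are placeholders, and you would need to download inswapper_128.onnx yourself.

import cv2
import insightface
from insightface.app import FaceAnalysis

# Face detector/analyzer and the swapping model used by Roop.
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))
swapper = insightface.model_zoo.get_model("inswapper_128.onnx")

source = cv2.imread("reference_face.jpg")  # the face you want in every image
target = cv2.imread("generated.png")       # a Stable Diffusion output

source_face = app.get(source)[0]
result = target.copy()
for face in app.get(target):
    result = swapper.get(result, face, source_face, paste_back=True)

cv2.imwrite("swapped.png", result)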

Sharpening faces

You may have noticed that the faces, while changed, are a little blurry. There are four ways to produce sharper images.

  1. Use a high-resolution reference image.
  2. Use Face restoration.
  3. Follow with another round of img2img.
  4. Use dreambooth to create a new model. (See next section)

Use face restoration to sharpen faces

You can make the face sharper by enabling face restoration.

  • Restore face: CodeFormer
  • Restore visibility: 0.5

Face restoration could alter the style of the face, making it look artificial. You want to apply the lowest restore visibility you can get away with.

Now we get a sharper face:

Roop with face restoration.
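
If you are scripting the swap as in the sketch above, a face restoration pass can be added in code as well. The example below uses GFPGAN rather than the CodeFormer option shown in the Web-UI, purely because its Python API is simple; it assumes you have downloaded the GFPGANv1.4.pth weights.

import cv2
from gfpgan import GFPGANer

# Restore (sharpen) faces in the swapped image.
restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=1)

img = cv2.imread("swapped.png")
_, _, restored = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)
cv2.imwrite("restored.png", restored)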

Sharpen face with an additional round of img2img

An alternative method is to use img2img. Once you have a face-swapped image generated with Roop, send it to img2img using the Send to img2img button under the image canvas.

On the img2img page, set the denoising strength to 0.1. Keep other settings unchanged and press Generate.
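
The same second pass can be sketched with diffusers’ img2img pipeline, where the strength argument plays the role of the Web-UI’s denoising strength. The model and file names here are placeholders.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V2.0", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("swapped.png").convert("RGB")

sharpened = img2img(
    prompt="photo of young woman, highlight hair, sitting outside restaurant, wearing dress",  # use your full original prompt
    negative_prompt="disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w",
    image=init_image,
    strength=0.1,  # low strength: keep the face, only add detail
).images[0]
sharpened.save("sharpened.png")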

Now you get a sharper face:

Train your own model with Dreambooth

Perhaps the most reliable way to generate the same face is to use Dreambooth to create your own Stable Diffusion model.

Dreambooth is a technique to create a new Stable Diffusion checkpoint model with your own subject or style. In this case, the subject would be the person with your desired face.

Follow this link to find a step-by-step tutorial. You will need a few images of the person.

Gathering the training images could be a challenge. Here are a few options.

  1. Ask someone you know for permission to use their photos.
  2. Take some selfies.
  3. Use the multiple celebrity name method above to generate training images.
  4. Use the Roop method above to generate training images.

We will use Roop to generate the training images.

Step 1: Generate training images with Roop

Follow the instructions from the previous section to generate 8 to 15 images with the same face using Roop. Below are two examples of the training images. It’s fine to use blurry images.

Training image #1 (from Roop)
Training image #2 (from Roop)

Step 2: Train a new checkpoint model with Dreambooth

Follow the Dreambooth tutorial and download the Dreambooth training Colab notebook.

Since we want to train a model with a realistic style, we will use Realistic Vision v2.

MODEL_NAME:

SG161222/Realistic_Vision_V2.0

BRANCH:

main

Your new girl will be called zwx, a rare but existing token in Stable Diffusion. Since zwx is a woman, the instance prompt is

photo of zwx woman

The class is the category zwx belongs to, which is woman. So the class prompt is

photo of woman

By defining the class prompt correctly, you take advantage of all the prior attributes of women in the model and apply them to your girl.

Press the Play button to start training.

Upload the training images when prompted.

Training will take some time. If everything goes well, the new model file will be saved under the designated output file name.
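
The Colab notebook wraps all of this for you. For reference only, the same instance prompt and class prompt map onto diffusers’ example Dreambooth training script roughly as follows; the paths, step count, and class image count are illustrative, and train_dreambooth.py comes from the diffusers repository.

import subprocess

subprocess.run([
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "SG161222/Realistic_Vision_V2.0",
    "--instance_data_dir", "./zwx_training_images",  # your 8-15 Roop images
    "--instance_prompt", "photo of zwx woman",
    "--with_prior_preservation",
    "--class_data_dir", "./class_images",
    "--class_prompt", "photo of woman",
    "--prior_loss_weight", "1.0",
    "--num_class_images", "200",
    "--resolution", "512",
    "--output_dir", "./dreambooth_zwx",
    "--max_train_steps", "800",
], check=True)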

Step 3: Using the model

You can conveniently test your new model using the AUTOMATIC1111 Colab notebook. The Dreambooth model will be available to load as long as you don’t change the default paths in either notebook.

Select your new Dreambooth model in AUTOMATIC1111’s checkpoint dropdown menu.

Now test with a prompt that includes your girl’s name, zwx:

photo of young zwx woman, highlight hair, sitting outside restaurant, wearing dress, rim lighting, studio lighting, looking at the camera, dslr, ultra quality, sharp focus, tack sharp, dof, film grain, Fujifilm XT3, crystal clear, 8K UHD, highly detailed glossy eyes, high detailed skin, skin pores

Now you get a consistent and sharp face every time you use the keyword zwx!

Consistent face with Dreambooth image #1
Consistent face with Dreambooth image #2
Consistent face with Dreambooth image #3
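
If you trained with the diffusers script sketched above, the resulting folder can also be loaded directly in Python (the Colab notebook instead produces a checkpoint file for AUTOMATIC1111):

import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth output folder from the training sketch above.
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth_zwx", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "photo of young zwx woman, highlight hair, sitting outside restaurant, wearing dress",
    negative_prompt="disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w",
).images[0]
image.save("zwx_test.png")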

You can also generate this person in a different style.

oil painting of zwx young woman, highlight hair

disfigured, ugly, bad, immature, b&w, frame

Remark

Now you know how to use Dreambooth to generate consistent faces. If you like the result but don’t like the large file size, train a LoRA model instead. The file size is a lot smaller.



By Andrew

Andrew is an experienced engineer with a specialization in Machine Learning and Artificial Intelligence. He is passionate about programming, art, photography, and education. He possesses a Ph.D. in engineering.

11 comments

  1. How about consistent wardrobes and backgrounds? What tools, techniques would you use to have the same model, using the same outfit in the same room but different poses?

  2. Any advice on getting Roop to run? Have added the extension but getting errors relating to failures building insightface, Package ‘insightface.thirdparty.face3d.mesh.cython’ is absent from the `packages` configuration, etc.

  3. Dunno if you’ve tried this, but I thought it was an interesting find that I’ve been using a fair bit:

    For the prompt method, you can use completely fictional names and generate consistent faces. Of course, it’s a bit hit and miss until you find one you like, but it works! Similarly, you can use numbers, so you can put a ‘seed’ and a parameter value in brackets, or you can use the blend method to blend two seeds for a consistent face. Again, somewhat hit and miss until you find a combination you like – but the advantage of both is that they don’t have to look like some known celebrity. You can obviously further guide the age and look, but I find it works consistently enough to be useful!

    So, for example, I use prompts like “40 year old woman, [99576:12345:0.5]”, and get the same pretty brunette fairly consistently. If I’m using a fictional name, I’ll often use a middle name to make it more distinct, e.g., “40 year old woman, (Michelle Alice Bullock:1.3)” gives me the same dark-skinned woman nearly every time. You can put the same formula in the Adetailer prompt, too.

  4. Wow, that was the answer I was struggling with for a long time. Thanks so much. Not so convinced by the blurry Roop output though…
