AnimateDiff Morph Art Style Portrait Video (ComfyUI)

This workflow generates a morphing portrait video across four different art styles, like the one below. The styles are controlled by text prompts. You can fine-tune each style by changing its prompt, and the transition pattern can be adjusted.

You will need to be a member of this site to download the ComfyUI workflow.

Software

Stable Diffusion GUI

We will use ComfyUI, a node-based Stable Diffusion GUI. You can use ComfyUI on Windows/Mac or Google Colab.

Check out Think Diffusion for a fully managed ComfyUI/A1111/Forge online service. They offer 20% extra credits to our readers (and a small commission to support this site if you sign up).

See the beginner’s guide for ComfyUI if you haven’t used it.

Use the L4 runtime type to speed up the generation if you use my Google Colab notebook.

How this workflow works

Overview

The morphing video is created using AnimateDiff for frame-to-frame consistency. This workflow uses four reference images, each injected into a quarter of the video. Each of them is independently generated by an SDXL model. You can change the prompts to change the images.

Using the SDXL base model is important because a fine-tuned model typically loses the ability to generate art styles.

Image injection

The IP-adapter is used to inject these reference images into the video generation process. Each image is injected with a mask over the frames so that it only affects its quarter of the video.
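
For intuition, here is a minimal Python sketch (not part of the workflow) of how four images can each be confined to a quarter of a 96-frame video using per-frame mask weights, with a short linear cross-fade between neighboring segments. The frame count and fade length are assumptions for illustration; in the actual workflow, this scheduling is handled by the Create Fade Mask Advanced node:

import numpy as np

def quarter_masks(num_frames=96, num_images=4, fade=8):
    """Per-frame weights (num_images x num_frames) restricting each image to its quarter."""
    seg = num_frames // num_images
    masks = np.zeros((num_images, num_frames))
    for i in range(num_images):
        start, end = i * seg, (i + 1) * seg
        masks[i, start:end] = 1.0           # full influence inside its own quarter
        if start > 0:                       # ramp up out of the previous segment
            masks[i, start - fade:start] = np.linspace(0.0, 1.0, fade)
        if end < num_frames:                # ramp down into the next segment
            masks[i, end:end + fade] = np.linspace(1.0, 0.0, fade)
    return masks

weights = quarter_masks()
print(weights.shape)      # (4, 96)
print(weights[1, 16:26])  # image 2 fading in just before its segment starts at frame 24

Each row of weights plays the role of the mask that gates one IP-adapter's influence over time.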

Dynamic pattern

If you watch the video carefully, you will see an outward motion originating at the center of the frame. This is done by feeding a pattern video to the QR Code ControlNet, which nudges the generated frames to follow the pattern's motion.
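
If you ever want to create your own pattern clip (the Dynamic pattern section under Customization lists ready-made links), a rough sketch like the one below generates an outward-moving radial pattern with NumPy and imageio (the imageio-ffmpeg backend is required to write MP4). The resolution, frame count, and wave parameters are my own arbitrary choices, not the settings of the original pattern videos:

import numpy as np
import imageio

WIDTH, HEIGHT, FRAMES = 432, 768, 96  # assumed to match the workflow's native size

# Distance of every pixel from the center of the frame
ys, xs = np.mgrid[0:HEIGHT, 0:WIDTH]
dist = np.hypot(xs - WIDTH / 2, ys - HEIGHT / 2)

frames = []
for t in range(FRAMES):
    # Rings that expand over time: the phase shifts with t so the bright
    # bands appear to travel away from the center.
    wave = 0.5 + 0.5 * np.sin(dist / 18.0 - t * 0.4)
    frames.append((wave * 255).astype(np.uint8))

# Write a grayscale MP4 that can be referenced by the Load Video (Path) node
imageio.mimsave("outward_pattern.mp4", frames, fps=12)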

Post-processing options

After generating the video with Stable Diffusion, you can optionally (all the necessary nodes are already in the workflow):

  • Upscale the video to a higher resolution.
  • Make the video smoother by interpolating the frames.
  • Add audio to the video.
  • Correct color balance.

Step-by-step guide

Step 1: Load the ComfyUI workflow

Download the workflow JSON file below. You will need to be a member and log in to download the workflow.

Drag and drop the JSON file into ComfyUI to load the workflow.

Step 2: Install missing custom nodes

You may see a few red nodes in the workflow. That means you are missing some custom nodes needed for this workflow.

First, install ComfyUI manager if you haven’t already.

Click the Manager button on the top bar.

In the popup menu, click Install Missing Custom Nodes. Install the missing custom nodes on the list.

Restart ComfyUI. Refresh the ComfyUI page.

If you still see red nodes, try Update All in the ComfyUI manager’s menu.

Step 3: Download models

The following models are needed for this workflow.

Checkpoint model

Download the Juggernaut Reborn (SD 1.5) model. Put it in ComfyUI > models > checkpoints.

Refresh and select the model in the Load Checkpoint node in the Settings group.

(If you use my Colab notebook: AI_PICS > models > Stable-Diffusion )

Download the SDXL base model. Put it in ComfyUI > models > checkpoints.

Refresh and select the model in the Load Checkpoint node in the Images group.

IP adapter

This workflow uses the IP-adapter to achieve a consistent face and clothing.

Download the SD 1.5 IP adapter Plus model. Put it in ComfyUI > models > ipadapter.

Download the SD 1.5 CLIP vision model. Put it in ComfyUI > models > clip_vision. Rename it to CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors to conform to the custom node’s naming convention.

These two models are needed for the IPAdapter Unified Loader node.
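
If you prefer to fetch these two files from the command line, a sketch like the following uses the huggingface_hub library. The repository ID and file paths are my assumptions about where the models are hosted; double-check them against the download links above, and point COMFYUI at your own install:

from pathlib import Path
from shutil import copyfile
from huggingface_hub import hf_hub_download

COMFYUI = Path("ComfyUI")  # adjust to your ComfyUI install location

# Assumed Hugging Face locations -- verify against the links in this section
ip_adapter = hf_hub_download("h94/IP-Adapter", "models/ip-adapter-plus_sd15.safetensors")
clip_vision = hf_hub_download("h94/IP-Adapter", "models/image_encoder/model.safetensors")

(COMFYUI / "models/ipadapter").mkdir(parents=True, exist_ok=True)
(COMFYUI / "models/clip_vision").mkdir(parents=True, exist_ok=True)

# Copy into the folders ComfyUI expects, renaming the CLIP vision model to the
# file name the IPAdapter Unified Loader node looks for
copyfile(ip_adapter, COMFYUI / "models/ipadapter/ip-adapter-plus_sd15.safetensors")
copyfile(clip_vision, COMFYUI / "models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors")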

ControlNet

Download the QR Code Monster ControlNet model. Put it in ComfyUI > models > controlnet.

Refresh and select the model in the Load Advanced ControlNet Model node in the QRCode ControlNet group.

AnimateDiff

Download the AnimateDiff MM-Stabilized High model. Put it in ComfyUI > models > animatediff_models.

Refresh and select the model.

Upscaler

Download the 4x-Ultrasharp upscaler model. Put it in ComfyUI > models > upscale_models.

(If you use my Colab notebook: AI_PICS > models > ESRGAN )
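
Before generating, a quick sanity check like the one below confirms that every folder mentioned above is populated. Only the folder names come from this guide; the file names are placeholders of mine, so replace them with the actual names of the files you downloaded:

from pathlib import Path

COMFYUI = Path("ComfyUI")  # adjust to your install

# Folder -> example file names (placeholders; match them to your downloads)
expected = {
    "models/checkpoints": ["juggernaut_reborn.safetensors", "sd_xl_base_1.0.safetensors"],
    "models/ipadapter": ["ip-adapter-plus_sd15.safetensors"],
    "models/clip_vision": ["CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"],
    "models/controlnet": ["control_v1p_sd15_qrcode_monster.safetensors"],
    "models/animatediff_models": ["mm-Stabilized_high.pth"],
    "models/upscale_models": ["4x-UltraSharp.pth"],
}

for folder, files in expected.items():
    for name in files:
        path = COMFYUI / folder / name
        status = "ok" if path.exists() else "MISSING"
        print(f"{status:8} {path}")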

Step 4: Generate video

Press Queue Prompt to start generating the video.

If you see an out-of-memory error, you can add the extra argument --disable-smart-memory to run_nvidia_gpu.bat.

.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --disable-smart-memory

Customization

Partial generation control

You can refine parts of the workflow by muting groups with the Fast Groups Muter node.

Disable the AnimateDiff group when you refine the prompts.

Disable the Upscale group when you work on the initial video, e.g. selecting a good seed.

Balancing static images and transition

The video is a delicate balance between the IP-adapter and the QR Code ControlNet. The IP-adapter wants to show a static image, while the QR Code ControlNet wants the video to follow the pattern video.

When you change the prompts, you may need to adjust the effect of the QR Code ControlNet by:

  • Increasing the strength.
  • Increasing the end_percent.

You likely don’t need to touch the IP-adapter values, but you can adjust their effect in the same way as the QR Code ControlNet.

Seed

Change the seed in the Sampler node to generate a different morphing pattern.

Prompts

Change the prompts to customize each image.

Styles

You can change the style by changing the prompts and/or the checkpoint models. I used the Juggernaut models to generate both the images and the video with AnimateDiff. Since they are general-purpose models, you should be able to change the style by adjusting the prompts alone.

Consistent face

A consistent face is enforced by including two celebrity names as keywords in the prompts.

Dynamic pattern

You can change the dynamic pattern by changing the link in the Load Video (Path) node.

Here are a few options.

https://imgur.com/FZojh3v.mp4
https://imgur.com/EHe7cAU.mp4
https://imgur.com/pAcpNXv.mp4
https://imgur.com/aRw6U8r.mp4
https://imgur.com/0eVlPZH.mp4
https://imgur.com/DlcJRtS.mp4

Video size

The video size is set to 432×768 for the native generation.

Change the aspect ratio to match what you want.

You can also reduce the width to as low as 256 px to speed up the generation. However, you will get fewer details.
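
Whatever size you pick, keep both dimensions multiples of 8, since Stable Diffusion generates on 1/8-scale latents. A small helper like this (my own sketch, not part of the workflow) snaps a target width and aspect ratio to valid values:

def snap_size(width, aspect_w, aspect_h, multiple=8):
    """Round a target width and matching height to multiples of 8 for SD latents."""
    w = round(width / multiple) * multiple
    h = round(width * aspect_h / aspect_w / multiple) * multiple
    return w, h

print(snap_size(432, 9, 16))  # (432, 768) -- the workflow's native size
print(snap_size(256, 9, 16))  # (256, 456) -- faster, but fewer details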

Video length

The video is set to 96 frames in the following node.

To adjust the video length, you can change the number of frames. Then, you will need to adjust the Create Fade Mask Advanced node accordingly.
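
As a rough guide to what adjusting the Create Fade Mask Advanced node means in practice, the sketch below scales a set of mask keyframe positions proportionally when the total frame count changes. The breakpoints here are made-up values for illustration; read the real ones off the node in your copy of the workflow:

def rescale_keyframes(keyframes, old_total=96, new_total=128):
    """Scale mask keyframe positions (frame index -> mask value) to a new video length."""
    return {round(frame * new_total / old_total): value
            for frame, value in keyframes.items()}

# Illustrative breakpoints only -- not the exact values used in the workflow
original = {0: 1.0, 20: 1.0, 28: 0.0, 95: 0.0}
print(rescale_keyframes(original))  # {0: 1.0, 27: 1.0, 37: 0.0, 127: 0.0}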

Upscaling

The size and models for upscaling are controlled by the following nodes. The upscale model has some effect on the style. You can pick one that works best for your artwork. See the upscaler tutorial for details.

Frame interpolation

You can adjust the frame interpolation settings in the RIFE VFI node. It is set to double the frame rate (a multiplier of 2) in this workflow.

Color correction

You can use the Color Correct node to correct any color artifacts in the video.

By Andrew

Andrew is an experienced engineer with a specialization in Machine Learning and Artificial Intelligence. He is passionate about programming, art, photography, and education. He has a Ph.D. in engineering.
