This workflow stylizes a dance video with the following techniques:
- IP-adapter for a consistent character.
- Multiple ControlNets for consistent frame-to-frame motion.
- AnimateDiff for frame-to-frame consistency.
- LCM LoRA to speed up video generation by a factor of 3.
- Detailer (ComfyUI's version of ADetailer) to fix faces (with AnimateDiff for consistency).
This is an updated version of the consistent character video-to-video workflow.
GIF version (reduced quality):
You will learn/get:
- A downloadable ComfyUI workflow that generates this video.
- Customization options.
- Notes on building this workflow.
Hi, huge thanks for sharing the workflow. I ran it successfully. Going further, I want to apply a girl's dancing video to a Minion, but the output keeps showing a girl. Would this be possible if I changed the VAE, checkpoint, or LoRA? Thanks again for sharing.
You can try adding a Minion LoRA and changing the prompt accordingly. One challenge is that the OpenPose ControlNet is trained on humans. You can experiment with different types of ControlNet to see if you can render a Minion.
Hi, I am having issues installing the missing custom nodes. Even when I use the Manager, the following three still fail:
(IMPORT FAILED) ComfyUI-Advanced-ControlNet
(IMPORT FAILED) AnimateDiff Evolved
(IMPORT FAILED) ComfyUI-VideoHelperSuite
Any hints? I am on Ubuntu and managed to install ComfyUI through git along with all the packages.
Best,
I managed to install those custom nodes (after upgrading Python to 3.9). But now I cannot load the IPAdapterApply node, and the SD 1.5 clip_vision model is not loading. Any ideas?
Hi, the IP-adapter node has been undergoing changes. I will update the workflow when it stabilizes.
For now, checking out a previous version should work.
git checkout 6a411dcb2c6c3b91a3aac97adfb080a77ade7d38
Thanks for the reply. As I am not that fluent with git, could you be more specific? It does not seem that 6a411dcb… is a commit in the ComfyUI repository, so I ran
'git log'
and tracked down a commit made close to when this article was written:
'git checkout d0165d819afe76bd4e6bdd710eb5f3e571b6a804'
Now when I start this older version of ComfyUI, all the custom nodes seem to have issues. (They appear red when I install them via the Manager, for example.)
Did I totally misunderstand your suggestion?
Also, is there a Python code version of this article? I tried to use
'ComfyUI-to-Python-Extension', but it didn't seem to work unless the ComfyUI JSON file loads correctly first.
Best,
I don't have experience converting a workflow to a Python script, but it should be possible if you are a coder, since all the ComfyUI code is in Python.
The commit checkout is for the IP-adapter custom node. Sorry for not being clear!
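To be concrete, here is a rough sketch, assuming the node was installed via the Manager so it lives in ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus (adjust the folder name if yours differs):
cd ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus  # the IP-adapter node's folder, not the ComfyUI root
git checkout 6a411dcb2c6c3b91a3aac97adfb080a77ade7d38
# to return to the latest version later:
git checkout main  # or master, depending on the repo's default branch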
Thanks Andrew! Now things load OK, but somehow I don't see any generated video. Is it output somewhere? I followed through Step 5, and I don't see any error messages in the terminal. Two Preview Image windows show pose detection results, but the last one is blank. I set frame_load_cap to 16-48. My apologies for the newbie questions.
You should see the ControlNet results in both previews: one should be OpenPose stick figures and the other soft edges.
You can check in the terminal whether the two preprocessing steps were successful.
Do you see the series of images being generated? If so, you may need to fix your ffmpeg. You should be able to run it from anywhere in the terminal.
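A quick sanity check, assuming a Linux or macOS terminal:
which ffmpeg     # should print a path such as /usr/bin/ffmpeg
ffmpeg -version  # should print version information without errors
# on Ubuntu, a missing binary can usually be fixed with:
sudo apt install ffmpeg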
It works now. The change I had to make was to put the 'SD 1.5 CLIP vision' model in the 'ComfyUI > models > clip_vision' folder instead of the 'ComfyUI > models > ipadapter' folder.
Thanks!
Yes, it should be clip_vision. Thanks for pointing it out!
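For anyone hitting the same problem, the fix from a terminal looks roughly like this (the filename is only an example; substitute the file you actually downloaded):
# move the CLIP vision model into the folder ComfyUI expects
mv ComfyUI/models/ipadapter/clip_vision_sd15.safetensors ComfyUI/models/clip_vision/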
This is an awesome patch!
Hey all – fantastic workflow. I’m getting an output … but LOTS of errors in the process.
ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 768]' is invalid for input of size 1310720 [and more like this]
lora key not loaded lora_unet_output_blocks_0_1_transformer_blocks_6_ff_net_0_proj.lora_up.weight
[LOTS more like this]
I’ve gone through the tutorial again and can’t see anything I’ve missed. Any thoughts?
Ignore me!! I'd loaded the wrong LoRA into the add_detail slot. Oops!
Perfect description. Thank you also for putting all the downloads in one place.
Having an issue using Think Diffusion. It will only allow me to upload 2 GB per file and suggests I use the URL submit option instead. Where do I find the URLs for the models larger than 2 GB?
You can right-click the download button and copy the link.
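For models hosted on Hugging Face, the copied link usually follows the 'resolve' pattern and can be pasted straight into the URL field. If you are downloading to your own machine instead, the same link works from the terminal (the path below is only a placeholder, not a real model):
wget 'https://huggingface.co/<org>/<repo>/resolve/main/model.safetensors'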