You can now use AI to generate short video clips with narration. This workflow combines Flux AI and RunwayML's Gen-3 Alpha to create a virtual YouTuber who delivers a narrated clip.
You will need a RunwayML subscription to use Gen-3 Alpha; the Standard plan will do.
Here’s how:
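This walkthrough is UI-driven, but if you'd rather generate the character image from a script, below is a minimal sketch using the Hugging Face diffusers library with FLUX.1-schnell. The model variant, prompt, and settings are illustrative assumptions, not the exact ones used in this guide, and the Gen-3 Alpha video step still happens on Runway's website.

```python
# Illustrative only: generate the virtual YouTuber's portrait with FLUX.1
# via Hugging Face diffusers instead of a web UI. Model, prompt, and
# settings are assumptions, not the exact ones used in this tutorial.
import torch
from diffusers import FluxPipeline

# FLUX.1-schnell is the fast, openly licensed variant; FLUX.1-dev gives
# higher quality if your hardware and license allow it.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload idle modules to CPU RAM to fit consumer GPUs

prompt = (
    "photorealistic portrait of a young woman at a desk with a microphone, "
    "soft studio lighting, looking at the camera, YouTube vlogger"
)
image = pipe(
    prompt,
    guidance_scale=0.0,       # schnell is guidance-distilled
    num_inference_steps=4,    # schnell needs only a few steps
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]

# Upload this image to RunwayML's Gen-3 Alpha as the starting frame,
# then add narration and lip sync with Runway's tools.
image.save("virtual_youtuber.png")
```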
Glambase + Kling is all you need
Hey Andrew,
Do you recommend Flux or Stable Diffusion as the base checkpoint model for realistic and detailed real-life characters?
Flux would work better for realism.
Bad lip sync too, wow. I bought a Runway membership for this? LOL
I could have done this locally, maybe even with better tools, but I still think this is great, even if it lacks ways to fix the output. My video came out with ninja-movie lip sync lol
Yeah, I wish they would train a proper lip-sync model like Loopy https://loopyavatar.github.io/
When will Loopy AI be released?
Not sure, but you can try Infinity AI 😉 https://studio.infinity.ai/
FLUX is not working on my M1 Max Mac at all. Forge with a FLUX checkpoint just says:
RuntimeError: linear(): input and weight.T shapes cannot be multiplied (4032×64 and 1×98304)
Thanks!
You can follow this guide for Flux on Mac: https://stable-diffusion-art.com/flux-mac/