Forum Replies Created
Thanks for the compliment! Great idea. I will add a feedback section when I get a chance.
Welcome! It’s great to have an expert like you here!
Hi! I updated the notebook today (3/21). Give it a try.
I’m not aware of such a ControlNet model.
The SDXL Canny ControlNet works pretty well.
You can use the SD1.5 segmentation ControlNet to generate something close, then use multiple rounds of img2img with ControlNet to transform the image to an SDXL style.
Hi, this sounds like an issue with SD Forge. Please file an issue in their repository.
Yes, that’s all it takes.
The E2E workflow should be accessible now.
Thanks for reporting the issue!
The issue is fixed. Thanks!
Thanks for reporting the issue! Can you send me the link to the quiz?
Hi! I don’t recall seeing this artifact. You can try the new soft inpainting feature in A1111: increase the mask blur to 20 or more, and you can use a high denoising strength.
(tutorial coming soon)
The first image can come from an input image, but AnimateDiff will change it significantly.
It is not possible to insert an image during the video generation.
I will add this to my list.
Hi! This list is beyond my resources to collect and maintain. One approach is to use only a few of them, so that they are easier to understand and monitor.
No worries!
Yes, it is on my list!
Welcome William!