Forum Replies Created
Hi! I updated the notebook today (3/21). Give it a try.
I’m not aware of such a ControlNet model.
The SDXL canny ControlNet works pretty well.
You can use the SD 1.5 segmentation ControlNet to generate something close, then use multiple rounds of img2img with ControlNet to transform the image to an SDXL style. See the sketch below.
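As a rough illustration, the workflow could look something like this in diffusers. The model IDs, prompt, file names, and strength values are placeholders, and I use plain SDXL img2img for the refinement rounds; you could swap in the SDXL canny ControlNet img2img pipeline there instead.

```python
import torch
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    StableDiffusionXLImg2ImgPipeline,
)
from diffusers.utils import load_image

# Step 1: SD 1.5 + segmentation ControlNet to get a rough composition.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_seg", torch_dtype=torch.float16
)
sd15 = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

prompt = "a cozy living room, photorealistic"   # placeholder prompt
seg_map = load_image("seg_map.png")             # your segmentation map
rough = sd15(prompt, image=seg_map).images[0]

# Free VRAM before loading SDXL.
del sd15
torch.cuda.empty_cache()

# Step 2: a few SDXL img2img rounds to pull the image toward an SDXL look.
sdxl = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = rough.resize((1024, 1024))
for strength in (0.6, 0.4, 0.3):  # decreasing strength keeps the composition
    image = sdxl(prompt, image=image, strength=strength).images[0]
image.save("result.png")
```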
Hi, this sounds like an issue with SD Forge. Please file an issue in their repository.
Yes, that’s all it takes.
The E2E workflow should be accessible now.
Thanks for reporting the issue!
The issue is fixed. Thanks!
Thanks for reporting the issue! Can you send me the link to the quiz?
Hi! I don’t recall seeing this artifact. You can try the new soft inpainting feature in A1111. Increase the mask blur to 20 or more. You can use a high denoising strength.
(tutorial coming soon)
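In the meantime, if you want to script it, roughly the same settings can be sent through the A1111 API. This is just a sketch that assumes the web UI was launched with --api; the file names and prompt are placeholders, and the Soft Inpainting toggle itself (an alwayson script) is not shown here.

```python
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "prompt": "same prompt you used for the original image",  # placeholder
    "init_images": [b64("image.png")],   # image to inpaint
    "mask": b64("mask.png"),             # white = area to regenerate
    "mask_blur": 20,                     # 20 or more, as suggested above
    "denoising_strength": 0.75,          # a high strength is fine here
    "inpainting_fill": 1,                # 1 = "original"
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("result.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```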
The first image can come from an input image, but AnimateDiff will change it significantly.
It is not possible to insert an image partway through the video generation.
I will add this to my list.
Hi! Collecting and maintaining such a list is beyond my resources. One approach is to use only a few of them, so they are easier to understand and monitor.
No worries!
Yes, it is on my list!
Welcome William!
Welcome!
Hi! The A1111 software has a memory leak. The notebook just launches it, so there’s little I can do.
A newer version has just come out. I will update the notebook after it stabilizes. You can try it and see if it works better.