Forum Replies Created
Hi! I don’t recall seeing this artifact. You can try the new soft inpainting feature in A1111: increase the mask blur to 20 or more, and you can use a high denoising strength.
(tutorial coming soon)
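If it helps, here is a minimal sketch of the equivalent img2img inpaint call through the A1111 API, using a large mask blur and a high denoising strength. The field names are the standard /sdapi/v1/img2img payload fields; whether soft inpainting itself can be toggled over the API depends on your A1111 version, so treat that part as an assumption and enable it in the UI if in doubt.

```python
# Minimal sketch: inpainting via the A1111 img2img API (webui started with --api),
# with a large mask blur and high denoising strength as suggested above.
import base64
import requests

def b64(path):
    # Read an image file and return it as a base64 string for the API payload.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "prompt": "your prompt here",
    "init_images": [b64("input.png")],
    "mask": b64("mask.png"),        # white = area to repaint
    "mask_blur": 20,                # 20 or more to soften the seam
    "denoising_strength": 0.75,     # high strength works with a soft mask
    "inpainting_fill": 1,           # 1 = keep the original content under the mask
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
result = r.json()["images"]         # base64-encoded output images
```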
The first frame can come from an input image, but AnimateDiff will change it significantly.
It is not possible to insert an image during the video generation.
I will add this to my list.
Hi! This list is beyond my resources to collect and maintain. Perhaps one approach is to only use a few of them so that it is easier to understand and monitor.
No worries!
Yes, it is on my list!
Welcome William!
Welcome!
Hi! The A1111 software has a memory leak. The notebook just launches it, so there’s little I can do.
A newer version has just come out. I will update the notebook after it stabilizes. You can check whether it works better.
You should be able to train LoRA locally with 12GB VRAM. A popular software is Kohya_ss.
You can train textual inversion in A1111, but people generally prefer LoRA over textual inversion.
Using multiple GPUs is possible but likely requires additional setup. I am not sure if Kohya_ss has native support.
When buying a new GPU, I would advise prioritizing high VRAM because it will save you a lot of trouble getting things to work.
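For reference, here is a minimal sketch of launching a single-GPU LoRA training run with Kohya’s sd-scripts (train_network.py), which is what Kohya_ss wraps. The paths and values below are placeholders, and only the common flags are shown, so check the Kohya documentation for your setup.

```python
# Minimal sketch: single-GPU LoRA training with Kohya's sd-scripts.
# All paths and hyperparameters below are placeholders; adjust to your dataset.
import subprocess

cmd = [
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "models/v1-5-pruned-emaonly.safetensors",
    "--train_data_dir", "dataset/img",          # folders like 10_mysubject inside
    "--output_dir", "output/lora",
    "--network_module", "networks.lora",        # train a LoRA rather than a full model
    "--network_dim", "32",
    "--resolution", "512,512",
    "--learning_rate", "1e-4",
    "--max_train_steps", "1500",
    "--mixed_precision", "fp16",                # helps fit in 12GB VRAM
    "--save_model_as", "safetensors",
]
subprocess.run(cmd, check=True)
```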
I have overlooked this topic. Adding it to my list… thanks!
Python 3.10 is good. You are using ComfyUI if I understand correctly.
Please post the full log and the workflow you are running.
Hi, this shouldn’t happen, so perhaps there’s something wrong with your installation. Note that the Python it uses is the one in the venv folder, not the one on the host.
What GUI are you using?
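As a quick check of the venv point above, you can run the venv’s own interpreter and confirm which Python it actually is. This is just a diagnostic snippet, not part of the webui:

```python
# Run this with the venv's interpreter (venv/bin/python or venv\Scripts\python.exe)
# to confirm which Python A1111 is actually using.
import sys

print(sys.executable)   # full path of the interpreter that is running
print(sys.version)      # should report 3.10.x for a standard A1111 venv
```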
In reply to: RuntimeError: Expected all tensors to be on the same device

Hi, I tested the notebook with T4 and it seems to work fine. I noticed AnimateDiff is finicky (a known issue).
This is what I did:
- Run the notebook with the AnimateDiff and ControlNet extensions only
- Select the Dreamshaper 8 model (SD 1.5)
- Enter a prompt (“a beautiful girl”)
- Upload a video in AnimateDiff (video I used)
- Enable AnimateDiff
- Enable ControlNet
- Select Openpose Full in ControlNet
- Generate (didn’t work; error message, but not the same as yours)
- Disable and re-enable AnimateDiff
- Generate (worked)
So you can try toggling AnimateDiff to see if that resolves the issue.
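If you end up scripting this instead of using the UI, here is a minimal sketch of the same kind of txt2img call through the A1111 API with a ControlNet OpenPose unit attached. The ControlNet block follows that extension’s API; the AnimateDiff part is left as a commented placeholder because its exact argument names are an assumption on my part, so check the AnimateDiff extension’s documentation for its real schema.

```python
# Minimal sketch: txt2img via the A1111 API with a ControlNet (OpenPose) unit.
# Assumes the webui was started with --api and the ControlNet extension is installed.
import requests

payload = {
    "prompt": "a beautiful girl",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "module": "openpose_full",              # preprocessor
                "model": "control_v11p_sd15_openpose",  # adjust to the model you have
                "weight": 1.0,
            }]
        },
        # AnimateDiff would be added here as well; the key names below are
        # hypothetical placeholders, not the extension's confirmed API:
        # "AnimateDiff": {"args": [{"enable": True, "model": "mm_sd_v15_v2.ckpt"}]},
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
frames = r.json()["images"]   # base64-encoded outputs
```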
Welcome Jesus! Looks like you have a lot of experience. You can start by training a LoRA and choosing a token that is already close to what you want.
Yes, you can share models between them. See the instructions for setting up the config file in ComfyUI.
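As a sketch, the sharing is done through an extra_model_paths.yaml file in the ComfyUI root folder (ComfyUI ships an extra_model_paths.yaml.example). The snippet below writes a minimal version pointing at an A1111 install; the paths are placeholders, and the keys should be double-checked against the bundled example file.

```python
# Minimal sketch: create extra_model_paths.yaml so ComfyUI reuses A1111's model folders.
# base_path below is a placeholder; point it at your own webui folder and compare
# the keys with ComfyUI's bundled extra_model_paths.yaml.example.
config = """\
a111:
    base_path: /path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    controlnet: models/ControlNet
"""

with open("extra_model_paths.yaml", "w") as f:   # place this file in the ComfyUI root
    f.write(config)
```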