Forum Replies Created
Welcome William!
Welcome!
Hi! The A1111 software has a memory leak. The notebook just launches it, so there's little I can do.
A newer version has just come out. I will update the notebook after it stabilizes. You can see if it works better.
You should be able to train a LoRA locally with 12GB of VRAM. A popular tool is Kohya_ss.
You can train textual inversion in A1111, but people generally prefer LoRA over textual inversion.
Using multiple GPUs is possible but likely requires additional setup. I am not sure if Kohya_ss has native support.
When buying a new GPU, I would advise prioritizing high VRAM because it will save you a lot of trouble getting things to work.
I have overlooked this topic. Adding it to my list… thanks!
Python 3.10 is good. You are using ComfyUI if I understand correctly.
Please post the full log and the workflow you are running.
Hi, this shouldn't happen, so perhaps there's something wrong with your installation. Note that the Python it uses is the one in the venv folder, not the one on the host.
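To see the distinction for yourself, you can compare the host interpreter with a freshly created virtual environment. This is only an illustration; `demo_venv` is a made-up folder name, while A1111 creates its environment in a folder called `venv` inside its own directory:

```shell
# A virtual environment carries its own Python interpreter, separate
# from the host's. "demo_venv" is just an example name; A1111's
# environment lives in the "venv" folder of its install directory.
python3 -m venv demo_venv
demo_venv/bin/python --version   # the interpreter the venv uses
which python3                    # the host interpreter
```

If the two report different versions or paths, that is expected: fixing the host Python does not change what the venv runs.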
What GUI are you using?
February 25, 2024 at 9:00 am in reply to: RuntimeError: Expected all tensors to be on the same device #12638
Hi, I tested the notebook with a T4 and it seems to work fine. I noticed AnimateDiff is finicky (a known issue).
This is what I did:
- Run the notebook with the AnimateDiff and ControlNet extensions only
- Select the Dreamshaper 8 model (SD 1.5)
- Enter a prompt ("a beautiful girl")
- Upload a video in AnimateDiff (video I used)
- Enable AnimateDiff
- Enable ControlNet
- Select Openpose Full in ControlNet
- Generate (didn't work; got an error message, but not the same as yours)
- Disable and re-enable AnimateDiff
- Generate (worked)
So you can try toggling AnimateDiff to see if that resolves the issue.
Welcome Jesus! It looks like you have a lot of experience. You can start by training a LoRA and choosing a token that is already close to what you want.
Yes, you can share models between them. See the instructions for setting up the config file in ComfyUI.
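As a rough sketch of what that config can look like: ComfyUI ships an `extra_model_paths.yaml.example` file in its root folder; rename it to `extra_model_paths.yaml` and point `base_path` at your A1111 installation. The path below is a placeholder, and the exact subfolder keys should be checked against the example file in your own install:

```yaml
# extra_model_paths.yaml -- lets ComfyUI read models from an A1111 install.
# base_path is a placeholder; change it to your own webui folder.
a111:
    base_path: /path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    controlnet: models/ControlNet
    embeddings: embeddings
```

Restart ComfyUI after editing the file so it picks up the new paths.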
You can ask them whether IP-Adapter is supported. Since you don't have access to their environment, they should install InsightFace for you. You can point them to my article for the installation steps.
Hi Rogier, please reach out to ThinkDiffusion's support team. They should be able to correct the issue.
On a local installation, you must create the ipadapter folder AND restart ComfyUI for it to take effect. See if restarting the server works for you.
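A minimal sketch of that fix, assuming a default ComfyUI folder layout (the install path and model filename below are placeholders):

```shell
# Create the ipadapter model folder inside ComfyUI's models directory.
# "ComfyUI" here stands in for your actual install path.
mkdir -p ComfyUI/models/ipadapter
# Put your IP-Adapter model files in it, for example:
# cp ip-adapter_sd15.safetensors ComfyUI/models/ipadapter/
# Then restart the ComfyUI server; it scans the model folders at startup,
# so a folder created while it is running will not be seen until restart.
```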
You can ignore the ComfyUI_windows_portable folder if you are on Linux. That only applies to a Windows installation made with the standalone installer.
Hi Arthur,
I think what you see is the correct behavior.
- The workflow uses a high denoising strength, so it will change faces. Try not to mask the faces.
- I should have mentioned that you should not use an inpainting model. It is not compatible with the workflow.
OK, let me take a look and download the models from them when I get a chance. Thanks for the heads up!
Hi, the notebook in the quick start guide is for using the Stable Diffusion WebUI (A1111).
You can access the training notebooks and images on the resources page.
https://stable-diffusion-art.com/members-resources/
For training SD 1.5 models, see the section: “Train LoRA for Stable Diffusion 1.5”
For training SDXL models, see the section: “Train LoRA for Stable Diffusion XL”
Please let me know if you have any questions!