Tagged: dreambooth, lora
- This topic has 8 replies, 2 voices, and was last updated 8 months, 3 weeks ago by Andrew.
AuthorPosts
March 28, 2024 at 2:06 pm #13185
Just joined the community and excited to learn and contribute.
I've been learning SD for the last 2.5 weeks. My first self-assignment 😂 is to generate ultra-realistic portraits of myself.
I’m trying to understand the best method to use (with Google Colab training). My factors are time to train, actually looking like me, and quality of image.
So far I've tried InstantID (looked bad) and Dreambooth (auto train), and the output LoRA looks nothing like me.
What do you recommend trying?
March 29, 2024 at 7:34 am #13201
Welcome!
Dreambooth is the most powerful method, so it should work.
You can try:
- Get better input pictures, with quality similar to my training example images.
- If your face doesn't show, train more steps.
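On the input-picture point: a quick way to standardize your training images to 512x512 before uploading them is a small Pillow script. This is just a sketch; the folder paths are placeholders to adjust for your setup.

```python
from pathlib import Path
from PIL import Image, ImageOps

def prep_images(src_dir: str, dst_dir: str, size: int = 512) -> None:
    """Center-crop and resize every image in src_dir to size x size."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*"):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        img = Image.open(path).convert("RGB")
        # ImageOps.fit crops to the target aspect ratio around the
        # center, then resizes, so faces near the middle are preserved.
        img = ImageOps.fit(img, (size, size), Image.LANCZOS)
        img.save(out / f"{path.stem}.png")

# Example (placeholder paths):
# prep_images("raw_photos", "training_images")
```

Consistent resolution and framing tends to matter more than the number of images.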
March 30, 2024 at 1:41 pm #13209
Great dreambooth guide! It worked well.
It looks ok when generating pictures. I’d like to tap into the power of SDXL. Any idea when the Dreambooth SDXL notebook will be ready?
Also, any other suggestions for training to get more realistic images (like a different model)? My goal is to make high-quality Vogue/fashion magazine-level images lol.
March 31, 2024 at 2:50 pm #13221
Great!
Dreambooth on SDXL consumes too much memory. I can’t find a way around it on Colab. You can try training a LoRA on SDXL.
Realistic Vision is a good starting point for what you want to achieve.
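If you go the LoRA-on-SDXL route, the diffusers example script is one option. A rough sketch of the invocation; the folders, prompt, and hyperparameter values here are placeholders to adjust for your data:

```shell
# Sketch: DreamBooth LoRA on SDXL with the diffusers example script.
# Assumes you've cloned the diffusers repo and installed the
# examples/dreambooth requirements.
accelerate launch train_dreambooth_lora_sdxl.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \
  --instance_data_dir="./my_face_images" \
  --instance_prompt="photo of arnab person" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --learning_rate=1e-4 \
  --max_train_steps=500 \
  --output_dir="./my_sdxl_lora"
```

LoRA training fits in Colab memory where full SDXL Dreambooth does not, which is why it's the workaround here.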
March 31, 2024 at 11:13 pm #13222
Thanks Andrew! I’ll try the LoRA on SDXL.
Now I'm getting an error from all the Dreambooth models that I produced. Generating an image worked in the Colab notebook, but it fails in A1111. I can see the image start to generate, but it fails in the last steps. I tried adding "--no-half-vae", but that didn't work either:
*** Error completing request
*** Arguments: ('task(ij6bkjfl5ur5iu8)', <gradio.routes.Request object at 0x79afaa619390>, 'photo of arnab in a coffee shop', '', [], 20, 'Euler a', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "/content/stable-diffusion-webui/modules/txt2img.py", line 110, in txt2img
    processed = processing.process_images(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 785, in process_images
    res = process_images_inner(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 933, in process_images_inner
    x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
  File "/content/stable-diffusion-webui/modules/processing.py", line 653, in decode_latent_batch
    raise e
  File "/content/stable-diffusion-webui/modules/processing.py", line 637, in decode_latent_batch
    devices.test_for_nans(sample, "vae")
  File "/content/stable-diffusion-webui/modules/devices.py", line 255, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
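For anyone hitting the same NaN error later: the flag from the traceback normally goes into webui-user.sh (or the equivalent launch cell of a Colab notebook), and it needs two plain hyphens; forum formatting often collapses them into a single dash. A sketch assuming the default A1111 layout:

```shell
# In stable-diffusion-webui/webui-user.sh:
export COMMANDLINE_ARGS="--no-half-vae"

# Or pass the flag directly when launching:
# python launch.py --no-half-vae
```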
(Also, I think the community would hugely benefit from a Discord channel.)
April 1, 2024 at 11:55 am #13225
It seems to happen only when training with Realistic Vision, not with SD v1.5.
April 2, 2024 at 12:42 am #13226
Hi, it seems that your trained model is not stable. You can:
- Train from SD 1.5 if possible. The base model is in a better state than Realistic Vision.
- Reduce the number of training steps and/or the learning rate. Use the least you can get away with.
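As a rough reference, with the diffusers DreamBooth script those two knobs are --max_train_steps and --learning_rate. A sketch; the folders, prompt, and values are placeholders, and it's best to start low and increase only if the likeness doesn't come through:

```shell
# Sketch: full DreamBooth fine-tune from SD 1.5 with conservative settings.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./my_face_images" \
  --instance_prompt="photo of arnab person" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=1e-6 \
  --max_train_steps=800 \
  --output_dir="./my_dreambooth_model"
```

Overtraining (too many steps or too high a learning rate) is a common cause of NaN-producing, unstable checkpoints.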
April 3, 2024 at 1:04 am #13228
Appreciate the help.
OK, so the SDXL LoRA seems to work best and look best in image generation (Juggernaut XL).
The close up pictures look great, but if I generate pictures slightly further away my face doesn’t resemble me. Ideas?
April 5, 2024 at 6:55 am #13231
You can try the following to see which works better:
- Add training images with your face at a similar size (in pixels).
- Use inpainting ("only masked") at a denoising strength of 0.5 or so to redraw the face at a higher resolution.