Arnab Raychaudhuri

Forum Replies Created

Viewing 4 posts - 1 through 4 (of 4 total)
  • Arnab Raychaudhuri
    Participant

      Appreciate the help.

      OK, so the SDXL LoRA seems to work and look best in image generation (Juggernaut XL).

      The close-up pictures look great, but if I generate pictures from slightly further away, my face doesn't resemble me. Ideas?

      Arnab Raychaudhuri
      Participant

        It seems to happen only when training with Realistic Vision, not with SD v1.5.

        Arnab Raychaudhuri
        Participant

          Thanks Andrew! I’ll try the LoRA on SDXL.

          Now I'm getting an error from all the Dreambooth models I produced. Generating an image worked in the Colab, but it fails in A1111. I can see the image start to generate, but it fails in the last steps. I tried adding "--no-half-vae", but that didn't work either:


          *** Error completing request
          *** Arguments: ('task(ij6bkjfl5ur5iu8)', <gradio.routes.Request object at 0x79afaa619390>, 'photo of arnab in a coffee shop', '', [], 20, 'Euler a', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
          Traceback (most recent call last):
            File "/content/stable-diffusion-webui/modules/call_queue.py", line 57, in f
              res = list(func(*args, **kwargs))
            File "/content/stable-diffusion-webui/modules/call_queue.py", line 36, in f
              res = func(*args, **kwargs)
            File "/content/stable-diffusion-webui/modules/txt2img.py", line 110, in txt2img
              processed = processing.process_images(p)
            File "/content/stable-diffusion-webui/modules/processing.py", line 785, in process_images
              res = process_images_inner(p)
            File "/content/stable-diffusion-webui/modules/processing.py", line 933, in process_images_inner
              x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
            File "/content/stable-diffusion-webui/modules/processing.py", line 653, in decode_latent_batch
              raise e
            File "/content/stable-diffusion-webui/modules/processing.py", line 637, in decode_latent_batch
              devices.test_for_nans(sample, "vae")
            File "/content/stable-diffusion-webui/modules/devices.py", line 255, in test_for_nans
              raise NansException(message)
          modules.devices.NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
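
          The error message itself suggests the fix: the VAE is producing NaNs in half precision, and --no-half-vae forces it to run in full precision. For reference, this is where the flag usually goes in a standard AUTOMATIC1111 install (a sketch only; a Colab notebook may pass launch arguments in its own launch cell instead):

          ```shell
          # webui-user.sh (read by webui.sh at startup):
          # add --no-half-vae to the web UI's command-line arguments
          export COMMANDLINE_ARGS="--no-half-vae"

          # Equivalent when launching the web UI directly:
          #   python launch.py --no-half-vae
          ```

          If the flag is added to the wrong place (e.g. a cell that isn't actually used to launch the UI), it silently has no effect, which may be why it appeared not to work.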


          (Also, I think the community would hugely benefit from a Discord channel.)

          Arnab Raychaudhuri
          Participant

            Great Dreambooth guide! It worked well.

            The generated pictures look OK, but I'd like to tap into the power of SDXL. Any idea when the Dreambooth SDXL notebook will be ready?

            Also, any other suggestions for training to get more realistic images (like a different base model)? My goal is to make high-quality Vogue/fashion-magazine-level images, lol.
