Tagged: Checkpoints, Comfyui, sdxl
- This topic has 2 replies, 2 voices, and was last updated 5 days, 2 hours ago by Zandebar.
February 21, 2025 at 7:38 am #17287
Hello
SD UI – Comfy with SDXL Checkpoints
I’m running into issues with the checkpoint and the resulting images:
My issue is that I’m prompting for an outdoor scene, using all the right words for an outdoor image. I’m not being obscure; I just want an outdoor scene. I’ve been running batches locally (batch size 4, batch count 25), and around 47–53% of the completed images come back as indoor scenes: a photographic backdrop, lighting equipment in shot, a subject against an interior wall, or just a generic indoor setting.
I don’t think it’s my prompt, as I’ve laced enough outdoor elements into it. While experimenting, I’ve decided to stick to one checkpoint and explore it thoroughly. I’m using Juggernaut XL (juggernautXL_juggXIByRundiffusion.safetensors), as it’s a quality checkpoint from a good source with solid documentation.
I’ve even structured my prompt the way Juggernaut XL expects the prompt layout to be, following:
https://learn.rundiffusion.com/prompt-guide-for-juggernaut-xi-and-xii/
https://learn.rundiffusion.com/prompting-guide-for-juggernaut-x/
I’ve even asked ChatGPT, Gemini, and a local Llama 3.2 whether the prompt would produce an indoor or an outdoor image. All three said it would return an outdoor image. I also asked for extra words to clearly define an outdoor scene, so I injected “open air”, “exterior shot”, and “exclude any interior architecture” into the prompt flow (in the section where such terms belong) to steer it away from indoor images, and I’ve adapted the negative prompt as well.
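For anyone trying the same approach, the assembly of steering terms can be sketched in a few lines of Python. The subject, term lists, and section layout below are illustrative assumptions, not the exact prompt from this post:

```python
# Sketch of assembling a Juggernaut-style comma-separated prompt
# with outdoor steering terms and a matching negative prompt.
# All phrases here are hypothetical examples.

def build_prompts():
    subject = "a hiker resting on a granite boulder"  # hypothetical subject
    outdoor_terms = ["open air", "exterior shot", "wide landscape", "natural daylight"]
    negative_terms = ["indoor", "interior", "studio backdrop", "lighting equipment", "wall"]

    # Comma-separated phrase sections, as the Juggernaut guides suggest.
    positive = ", ".join([subject] + outdoor_terms)
    negative = ", ".join(negative_terms)
    return positive, negative

positive, negative = build_prompts()
print(positive)
print(negative)
```

The idea is simply to keep the outdoor vocabulary grouped and repeated consistently, and to mirror the failure modes (backdrops, lighting gear, walls) in the negative prompt.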
I don’t know; I’ve worked hard on the prompt and I don’t think that’s the issue. Which brings me to something you mentioned in the A1111 courses: the checkpoint’s training data and the limitations of its reference images. I’m wondering if I’m hitting that wall, that it’s simply the checkpoint’s limitation, since only around 50% of the images come out outdoors. You could say that’s fine, just disregard the indoor ones, but surely for something as simple as “outside” I should be hitting 80–90%, or even 100%, of images. This also ties into the issue of the prompt not always being followed.
I’m not sure exactly what’s going on; I might just be experiencing the joys of AI, and if that’s the reason I’ll have to accept it. On the other hand, if there’s anything more I can do to improve the output, I’d love to know. On one of the forums it was suggested that I use an outdoor LoRA; I looked but couldn’t find one that would fit.
Are there any adaptations I could use to increase the percentage output of outside images?
Kind Regards
-
February 21, 2025 at 5:35 pm #17402
Not easy to comment on the specifics without an example, but you can:
- Test the same prompt on the SDXL 1.0 base model. It is well trained and can give you an idea of whether the Juggernaut model has training issues.
- Check whether it is due to the association effect: https://stable-diffusion-art.com/courses/stable-diffusion-level-2/lessons/association-effect/
- Use the negative prompt to suppress any elements and settings you don’t want to see.
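As a concrete starting point for that last suggestion, a negative prompt along these lines targets the failure modes described in the original post (the exact terms are a hypothetical example, to be tuned per checkpoint):

```text
indoors, interior, room, studio, photographic backdrop, lighting equipment,
softbox, tripod, wall, ceiling
```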
-
March 4, 2025 at 8:10 am #17508
Thank you for the tip; I did just that and it improved the output.
I finally found out what was causing the issue: I’m using 2 LoRAs in the workflow, and bypassing these LoRAs improved the proportion of generated images that are outdoors, to the point where only about 2% don’t follow the prompt.
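For anyone running a similar A/B test (LoRAs on vs. bypassed), it helps to put a number on prompt adherence across a run. A minimal sketch, assuming the labels come from manually reviewing the images or from a scene classifier (both hypothetical here):

```python
# Quantify prompt adherence across a run of
# batch size 4 x batch count 25 = 100 images.

def adherence_rate(labels):
    """Return the fraction of images labelled 'outdoor'."""
    outdoor = sum(1 for label in labels if label == "outdoor")
    return outdoor / len(labels)

# Example: 98 outdoor images and 2 indoor ones,
# matching the ~2% failure rate described above.
labels = ["outdoor"] * 98 + ["indoor"] * 2
print(adherence_rate(labels))  # 0.98
```

Running the same tally with each LoRA enabled individually would show which of the two was dragging the scenes indoors.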
1 issue fixed, 3 more created, LOL! 😉