Forum Replies Created
Haven’t encountered this before, but you can try:
- Using SDP attention: add the "--opt-sdp-attention" argument (see the sketch below).
- Forge. It uses a different backend; try it and see if you get the same error.
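For reference, that flag switches the attention implementation to PyTorch 2.0’s built-in scaled dot-product attention. A minimal sketch of the call it relies on under the hood (assumes torch 2.0+; the tensor shapes are just placeholders):

```python
import torch
import torch.nn.functional as F

# Dummy query/key/value tensors shaped (batch, heads, tokens, head_dim).
q = torch.randn(1, 8, 77, 64)
k = torch.randn(1, 8, 77, 64)
v = torch.randn(1, 8, 77, 64)

# This is the kernel that --opt-sdp-attention selects instead of the
# default split-attention or xformers implementations.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 77, 64])
```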
Not easy to comment on the specifics without an example, but you can:
- Test the same prompt on the SDXL 1.0 base model. It is well trained and can give you an idea of whether the Juggernaut model has training issues (see the sketch below).
- Check whether it is due to the association effect: https://stable-diffusion-art.com/courses/stable-diffusion-level-2/lessons/association-effect/
- Use a negative prompt to suppress any objects and settings you don’t want to see.
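If you want a controlled comparison, here’s a rough sketch with the diffusers library rather than a UI; the prompts and the Juggernaut file path are placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline

prompt = "your test prompt here"                         # placeholder
negative_prompt = "objects or settings you don't want"   # placeholder
seed = 0  # fixed seed so the two runs are comparable

# 1) SDXL 1.0 base as the reference.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
img = base(
    prompt,
    negative_prompt=negative_prompt,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
img.save("sdxl_base.png")

# Free the first pipeline if VRAM is tight.
del base
torch.cuda.empty_cache()

# 2) The Juggernaut checkpoint (placeholder path), same prompt and seed.
jug = StableDiffusionXLPipeline.from_single_file(
    "Juggernaut-XL.safetensors", torch_dtype=torch.float16
).to("cuda")
img = jug(
    prompt,
    negative_prompt=negative_prompt,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
img.save("juggernaut.png")
```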
Yes, it is possible, but I want to keep the UI simple.
If there’s enough interest, I can create another notebook for downloading models to Google Drive.
Hi, both the A1111 and Forge notebooks are now fixed.
Interesting. An option is to combine them: use Flux to generate the initial image, then SDXL with img2img at a low denoising strength.
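If you work in Python instead of a UI, here’s a minimal sketch of that two-pass idea with the diffusers library (the model IDs are the usual Hugging Face repos and the prompt is a placeholder; tune the strength to taste):

```python
import torch
from diffusers import FluxPipeline, StableDiffusionXLImg2ImgPipeline

prompt = "a portrait photo of a woman in a cafe"  # placeholder prompt

# Pass 1: Flux generates the initial image (strong prompt adherence).
flux = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
flux.enable_model_cpu_offload()  # Flux is large; offload to fit on one GPU
init_image = flux(prompt, num_inference_steps=25).images[0]

# Pass 2: SDXL img2img at a low denoising strength keeps the composition
# but re-renders the fine detail, which cuts down the plastic look.
sdxl = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
final = sdxl(prompt, image=init_image, strength=0.3).images[0]
final.save("flux_then_sdxl.png")
```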
Hi, can you post a Flux image that looks plastic and one from Juggernaut that you think is good?
I need to see what you are looking for before giving advice.
LoRA should be the go-to method for modifying models. A LoRA can also modify CLIP, but the main effect is in modifying the diffusion model.
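For example, with the diffusers library a LoRA is applied on top of the base checkpoint roughly like this (the LoRA file name and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# The LoRA patches the diffusion model's weights (and, if the file includes
# text-encoder layers, the CLIP weights as well).
pipe.load_lora_weights("path/to/my_style_lora.safetensors")  # placeholder path
pipe.fuse_lora(lora_scale=0.8)  # how strongly the LoRA is applied

image = pipe("a cat, in the trained style").images[0]
image.save("lora_test.png")
```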
I haven’t done a comparison, but sampling is a pretty standard process. I don’t think they would do anything different.
Hi, we need three models to use a diffusion model like Stable Diffusion or Flux:
- Diffusion model – for denoising during sampling.
- VAE – for converting the images between pixel and latent spaces.
- CLIP – for encoding the text prompt for conditioning during sampling.
Some checkpoints include all three in a single checkpoint file. Even if others don’t, the Load Checkpoint node uses the default VAE and CLIP models.
In addition to the original VAE, there are improved or finetuned versions, although they are rare. You can use the Load VAE node to specify the VAE you want. Typically, the difference is minimal.
Likewise, you can load the CLIP models directly using a node. Some models, like Flux, use two text encoders. Using the Dual CLIP loader allows you to feed different text prompts to different encoders. Some people swear they see a difference between the two, but this remains an under-explored area.
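The same idea, sketched outside ComfyUI with the diffusers library: the checkpoint supplies the diffusion model, and you can override the bundled VAE (or text encoders) with separately loaded components. The repo IDs are the usual Hugging Face ones; swap in whatever you actually use:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# A fine-tuned VAE loaded on its own, like the Load VAE node.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

# Pass it in to replace the VAE bundled with the checkpoint. The pipeline
# also exposes SDXL's two text encoders (text_encoder and text_encoder_2)
# as separate components, which is what dual CLIP loading maps to.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a watercolor landscape").images[0]
image.save("custom_vae.png")
```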
It’s great to see you figured it out. It’s a warning, so nothing to worry about.
Mmm… The only things I can think of are:
- Use a T4 instance (that’s what I used)
- Remove or rename the custom_nodes folder in AI_PICS > ComfyUI
Hi David, I just tested the workflow on Colab, and it works correctly. I used the dreamshaper model and the motion model in the workflow. Did you change any settings?
Good find! I have fixed the issue.
I haven’t used it, but it uses Kohya in the backend, so it should work.
Yes, I am in the process of writing the lessons. Should be up within a week.