Forum Replies Created
The newer GPU architectures include new optimization techniques and can be faster at both training and inference.
SD models use the GPU differently from games. A GPU card's FLOPS number (floating-point operations per second) is a good gauge of performance.
A 4090 is certainly faster than a 3090, but they should generate the same image with the same settings. The only difference is how long you wait.
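To make the FLOPS comparison concrete, here is a minimal sketch that estimates the relative speed of the two cards from their published FP32 specs. The TFLOPS figures are approximate vendor numbers, not my own measurements, and real generation speed also depends on memory bandwidth and software optimizations.

```python
# Approximate published FP32 specs (TFLOPS); treat these as ballpark figures.
specs_tflops = {"RTX 3090": 35.6, "RTX 4090": 82.6}

def relative_speedup(card_a: str, card_b: str) -> float:
    """How many times faster card_a should be than card_b, by raw FLOPS alone."""
    return specs_tflops[card_a] / specs_tflops[card_b]

print(round(relative_speedup("RTX 4090", "RTX 3090"), 2))  # ~2.32
```

By this rough measure, a 4090 should finish the same image in well under half the time of a 3090.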
I would only consider a card with 24GB+ VRAM if I were buying a GPU now. Consider it an investment to future-proof your system. A slower card means you wait longer; a low-VRAM card means you cannot run certain models at all (or you need to jump through hoops to do it).
But if you are happy with the current toolset (SD 1.5, SDXL, Flux), getting a 16GB card is not a bad idea to save some money.
Welcome!
November 9, 2024 at 11:19 am in reply to: Getting Colab Automatic1111 Working for the first time #15715
It seems that the notebook couldn't connect to your Google Drive. Do you have a Google account? If so, it should ask you to grant permission to access your Google Drive. This is necessary for saving images and accessing models.
I can improve the tutorial if you have time to help sort this out.
I have a 4090 and use the notebooks from time to time. Here’s my experience.
- For casual users, a cloud option is more cost-effective.
- Because of the interactive nature of the WebUIs, you can pay for a lot of idle time when using the cloud. This may push you to shut down machines, interrupting the enjoyment of creating images.
- A PC with a high-power GPU may require unexpected maintenance. The power supply is more likely to fail, for example. With the cloud, you don't need to worry about hardware.
- There's a startup time for the cloud. With a local setup, you can keep the SD server running 24/7.
For me, I stick with a local setup because it allows me to use SD intermittently without waiting for it to start.
Do you have example inputs that don’t work? It could potentially be improved.
Interesting, it seems to be by design. You will need to file a bug report on the Forge GitHub page and ask if anyone can fix it.
Welcome!
OK, it should now work with links with question marks.
Try removing the text after the question mark, e.g.
https://civitai.com/api/download/models/953264?type=Model&format=SafeTensor&size=pruned&fp=fp16
to
https://civitai.com/api/download/models/953264
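If you want to do this programmatically, a small Python-stdlib sketch that strips everything after the question mark (the query string) looks like this:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url: str) -> str:
    """Remove the query string and fragment from a URL, keeping scheme/host/path."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

url = ("https://civitai.com/api/download/models/953264"
       "?type=Model&format=SafeTensor&size=pruned&fp=fp16")
print(strip_query(url))  # https://civitai.com/api/download/models/953264
```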
You can uncheck the “Clear” or “Clear_output” checkbox so that you can see the error message.
Some models need an API key to download. Did you create and use one?
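As a hypothetical illustration (not the notebook's actual code), downloading a gated model with an API key would look roughly like this. The Bearer-header scheme and function names here are assumptions on my part; check Civitai's API documentation for the exact format.

```python
import urllib.request

def auth_headers(api_key: str) -> dict:
    """Build an Authorization header for an API-key download (assumed Bearer scheme)."""
    return {"Authorization": f"Bearer {api_key}"}

def download_model(url: str, api_key: str, out_path: str) -> None:
    """Fetch url with the API key attached and save the file to out_path."""
    req = urllib.request.Request(url, headers=auth_headers(api_key))
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
```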
Welcome, Herbert! Sounds like you are an old-timer. Yes, I maintain Colab notebooks for running A1111, Forge, and ComfyUI as a community service…
SVD doesn't use a prompt, so there's no way to specify camera motion.
I think you can do that with CogVideo. This workflow uses CogVideo image-to-video. You can remove the Flux part to use CogVideo only.
https://stable-diffusion-art.com/flux-cogvideo-text-to-video/
Yes, load it from the tutorial page or the resources page: https://stable-diffusion-art.com/members-resources/
Hi, for local generation, we have
- SVD
- Cogvideo I2V
For online,
- Kling
- RunwayML
There are probably others I missed. The last time I tested, Kling was the best.
Welcome! Taking the Stable Diffusion Levels 1 – 4 courses should give you a good foundation in prompts and settings.
I just tested the notebook, and it is working. It was likely a temporary issue.