Andrew

Forum Replies Created

Viewing 15 posts - 16 through 30 (of 187 total)
in reply to: Local Install vs GPU / Render Farms (online GPU) #15895
Andrew
Keymaster

I agree it's not entirely straightforward. I usually deal with memory issues as they arise. Options include using a more memory-efficient version of the model (FP8, FP4), using a smaller image size, unloading models from memory, etc.
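For illustration, here is a minimal sketch of these options using the Hugging Face diffusers library (the library, model ID, and settings are just examples, not tied to any particular workflow):

```python
# A minimal sketch of common VRAM-saving options, assuming the Hugging Face
# diffusers library; the model ID and settings are illustrative.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # half precision roughly halves weight memory vs FP32
)
# Move idle components (CLIP, UNet, VAE) to CPU RAM when not in use.
# Requires the accelerate package.
pipe.enable_model_cpu_offload()

# A smaller image size also lowers peak VRAM during sampling.
image = pipe("a cat in a spacesuit", width=768, height=768).images[0]
image.save("cat.png")
```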

in reply to: Getting Colab Automatic1111 Working for the first time #15894
Andrew
Keymaster

The HiRes fix function often results in memory issues. I am not sure what's wrong with the implementation. Maybe a Flux workflow with an increasing batch size or image size is a better way to test.
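As a rough sketch of that kind of test (using diffusers rather than A1111's HiRes fix; the model and sizes are illustrative), you could raise the resolution until the GPU runs out of memory:

```python
# A rough sketch of probing VRAM limits by raising the image size.
# Model and sizes are examples, not a specific recommended workflow.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

for size in (512, 768, 1024, 1536, 2048):
    try:
        # Few steps: we only care about peak memory, not image quality.
        pipe("test prompt", width=size, height=size, num_inference_steps=4)
        print(f"{size}x{size}: OK")
    except torch.cuda.OutOfMemoryError:
        print(f"{size}x{size}: out of memory")
        break
```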

in reply to: Stable diffusion video obsessed #15886
Andrew
Keymaster

Welcome, Jessica! Thank you for sharing your beautiful work. It is a great remix of "What Is Life"!

I'm sorry I didn't notice your message awaiting my moderation.

Andrew
Keymaster

This is a good suggestion. Instructions for A1111 and Forge should be interchangeable. I will probably add new ComfyUI ones.

in reply to: Getting Colab Automatic1111 Working for the first time #15881
Andrew
Keymaster

Thanks for the suggestions.

When I wrote the course, I tried to keep it agnostic to how you use A1111 (local / Colab / online service), so I didn't write much about instructions specific to the Colab notebook. I will add them to lessons that require additional extensions.

in reply to: Local Install vs GPU / Render Farms (online GPU) #15880
Andrew
Keymaster

Several factors determine the relationship between model size and the required VRAM.

1. Not all model parts need to be in memory simultaneously. For example, the CLIP model can be unloaded after processing the prompt, and the VAE is needed only after sampling. So the VRAM required is smaller than the model size.
2. A model's size is measured by its number of parameters. A parameter can be represented in different precisions on a GPU, e.g., FP32 (high), FP16, and FP8 (low). The lower the precision, the smaller the memory footprint, but the quality may be reduced.

Optimizations like these make it possible to fit large models into limited VRAM. See the sketch below for a back-of-the-envelope estimate.
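To make point 2 concrete, a quick back-of-the-envelope calculation (the ~12B parameter count for Flux.1-dev is an approximate public figure):

```python
# Back-of-the-envelope VRAM estimate: parameters x bytes per parameter.
# Flux.1-dev has roughly 12B parameters (approximate figure).
params = 12e9
bytes_per_param = {"FP32": 4, "FP16": 2, "FP8": 1}

for precision, nbytes in bytes_per_param.items():
    gb = params * nbytes / 1024**3
    print(f"{precision}: ~{gb:.0f} GB just for the weights")

# FP32: ~45 GB, FP16: ~22 GB, FP8: ~11 GB -- before activations and other
# overhead, and before any savings from unloading CLIP/VAE when idle.
```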


in reply to: Local Install vs GPU / Render Farms (online GPU) #15820
Andrew
Keymaster

I use cloud services for everything in my day job. There's no hardware maintenance and no noise when running a heavy job. I am fortunate that I am allowed to keep the VM running 24/7. I wouldn't enjoy it as much if I needed to shut it down every day.

Some online services, like Think Diffusion, auto-shutdown after a set amount of time. This is not a bad way to control costs.

in reply to: 24gb VRAM vs Architecture series #15819
Andrew
Keymaster

If you are on the fence, using an online service is not a bad idea. It is probably cheaper than owning a GPU if you are a casual user.

in reply to: Feedback on level 2 of the course #15811
Andrew
Keymaster

Thanks for the suggestions. I will make some changes.

in reply to: 24gb VRAM vs Architecture series #15810
Andrew
Keymaster

The newer architectures include new optimization techniques and can be faster for both training and inference.

SD models use the GPU differently from gaming applications. A GPU card's FLOPS number (floating-point operations per second) is a good gauge of performance. See the rough comparison below.

The 4090 is for sure faster than the 3090, but they should generate the same image with the same settings. The only difference is how long you wait.

I would only consider 24GB+ VRAM if I were buying a GPU card now. Consider it an investment to future-proof your system. A slower card means you need to wait longer; a low-VRAM card means you cannot run certain models at all (or you need to jump through hoops to do it).

But if you are happy with the current toolset (SD 1.5, SDXL, Flux), getting a 16GB card is not a bad idea to save some money.
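As a rough illustration of using FLOPS as a gauge (the TFLOPS figures below are approximate published FP32 specs, and real-world speedups vary with software):

```python
# Rough wait-time comparison from published FP32 FLOPS figures (approximate).
flops = {"RTX 3090": 35.6e12, "RTX 4090": 82.6e12}

ratio = flops["RTX 4090"] / flops["RTX 3090"]
print(f"4090/3090 raw FLOPS ratio: ~{ratio:.1f}x")  # ~2.3x

# If a generation takes 10 s on a 3090, the same settings would take roughly
# 10 / ratio ~= 4.3 s on a 4090: same image, shorter wait (real speedups vary).
```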


in reply to: Hello #15794
Andrew
Keymaster

Welcome!

in reply to: Getting Colab Automatic1111 Working for the first time #15715
Andrew
Keymaster

It seems that the notebook couldn't connect to your Google Drive. Do you have a Google account? If so, it should ask you to grant permission to access your Google Drive. This is necessary for saving images and accessing models.

I can improve the tutorial if you have time to sort this out.
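For reference, this is the standard snippet Colab notebooks use to mount Google Drive (stock google.colab API; the notebook may run an equivalent cell for you):

```python
# Standard Google Colab cell to mount your Google Drive.
# Running it opens a permission prompt for your Google account.
from google.colab import drive

drive.mount("/content/drive")  # files then appear under /content/drive/MyDrive
```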

I have a 4090 and use the notebooks from time to time. Here's my experience.

• For casual users, a cloud option is more cost-effective.
• Because of the interactive nature of the WebUIs, you can pay for a lot of idle time when using the cloud. This may push you to shut down machines, which interrupts the enjoyment of creating images.
• A PC with a high-power GPU may require unexpected maintenance; the power supply is prone to failure, for example. You don't need to worry about hardware when using the cloud.
• There's a startup time for the cloud, whereas a local setup can keep the SD server running 24/7.

I stick with a local setup because it allows me to use SD intermittently without waiting for it to start.

in reply to: Loading models into Colab from URL #15711
Andrew
Keymaster

Do you have example inputs that don't work? It could potentially be improved.

in reply to: Clip skip dropdown in Forge notebook for Colab #15710
Andrew
Keymaster

Interesting, it seems to be by design. You will need to file a bug report on the Forge GitHub page and ask if anyone can fix it.

in reply to: Hello #15709
Andrew
Keymaster

Welcome!
