Andrew

Forum Replies Created

Viewing 15 posts - 1 through 15 (of 178 total)
  • in reply to: Deforum and Forge #16031
    Andrew
    Keymaster

      Like many A1111 extensions, Deforum is not compatible with Forge. Even the Forge version of Deforum is broken. You can only use it on A1111.

  • in reply to: Include resources needed in course intro #16030
    Andrew
    Keymaster

      Agreed. I will add the relevant info to the course.

  • in reply to: Say hi and introduce yourself! #16029
    Andrew
    Keymaster

      Welcome Melanie! I’m happy that you are happy 😊

  • in reply to: Say hi and introduce yourself! #15976
    Andrew
    Keymaster

      Welcome Jared! You’ve come at the right time. Local AI videos have started to mature.

  • in reply to: Getting Colab Automatic1111 Working for the first time #15910
    Andrew
    Keymaster

      Yes, that could be the case.

  • in reply to: 30 Days Using Stable Diffusion Art with Scholar Monthly #15896
    Andrew
    Keymaster

      Thanks for the feedback! It’s great that you find good value in the membership.

      I aim to spread the knowledge while being somewhat compensated for my time (and having it feel justified). I’m glad to have found a solution that works both ways.

  • in reply to: Local Install vs GPU / Render Farms (online GPU) #15895
    Andrew
    Keymaster

      I agree it’s not entirely straightforward. I usually deal with memory issues as they arise. This can be done by using a more memory-efficient version of the model (fp8, fp4), using a smaller image size, unloading models from memory, and so on.
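      To illustrate why a smaller image size helps, here is a rough sketch, assuming an SD-style VAE that downsamples the image by 8x; the exact attention layout varies by model, so treat this as an illustration rather than a measurement:

      ```python
      # Rough sketch: SD-style VAEs downsample the image by 8x, and each
      # latent position acts roughly like a token in the attention layers.
      # Self-attention cost grows with the square of the token count, so
      # doubling both width and height costs far more than 2x the memory.

      def latent_tokens(width: int, height: int) -> int:
          """Approximate token count for an 8x-downsampled latent."""
          return (width // 8) * (height // 8)

      small = latent_tokens(512, 512)    # 4096 tokens
      large = latent_tokens(1024, 1024)  # 16384 tokens
      print(large / small)               # 4.0  (4x the tokens)
      print((large / small) ** 2)        # 16.0 (~16x the attention cost)
      ```

      This is why a modest reduction in resolution often rescues a generation that would otherwise run out of VRAM.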

  • in reply to: Getting Colab Automatic1111 Working for the first time #15894
    Andrew
    Keymaster

      The HiRes fix function often results in memory issues. I am not sure what’s wrong with the implementation. Maybe a Flux workflow with an increasing batch size or image size is a better way to test.

  • in reply to: Stable diffusion video obsessed #15886
    Andrew
    Keymaster

      Welcome, Jessica! Thank you for sharing your beautiful work. It is a great remix of “What is life”!

      I’m sorry I didn’t notice your message awaiting my moderation.

    Andrew
    Keymaster

      This is a good suggestion. Instructions for A1111 and Forge should be interchangeable. I will probably add new ComfyUI ones.

  • in reply to: Getting Colab Automatic1111 Working for the first time #15881
    Andrew
    Keymaster

      Thanks for the suggestions.

      When I wrote the course, I tried to keep it agnostic to how A1111 is run (local/Colab/online service), so I didn’t include many instructions specific to the Colab notebook. I will add them to lessons that require additional extensions.

  • in reply to: Local Install vs GPU / Render Farms (online GPU) #15880
    Andrew
    Keymaster

      Several factors determine the relationship between model size and the required VRAM.

      1. Not all parts of the model need to be in memory simultaneously. For example, the CLIP model can be unloaded after processing the prompt, and the VAE is needed only after sampling. So the VRAM required is smaller than the model size.
      2. A model’s size is measured by its number of parameters. A parameter can be stored at different precisions on a GPU, e.g., FP32 (high), FP16, and FP8 (low). The lower the precision, the smaller the memory footprint, but the quality may be reduced.

      Optimizations like these make it possible to fit large models into limited VRAM.
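      The precision point can be put in numbers. This is a back-of-envelope sketch; the 12B parameter count is an assumption (roughly Flux-sized), and activations, the text encoders, and the VAE add memory on top of the weights:

      ```python
      # VRAM needed just to hold a model's weights, at various precisions.
      # Decimal gigabytes (1 GB = 1e9 bytes) for simplicity.

      BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1, "fp4": 0.5}

      def weight_memory_gb(n_params: float, precision: str) -> float:
          """Gigabytes needed for the weights alone."""
          return n_params * BYTES_PER_PARAM[precision] / 1e9

      for p in ("fp32", "fp16", "fp8"):
          print(f"12B params @ {p}: {weight_memory_gb(12e9, p):.0f} GB")
      # 12B params @ fp32: 48 GB
      # 12B params @ fp16: 24 GB
      # 12B params @ fp8: 12 GB
      ```

      Halving the precision halves the weight footprint, which is why fp8 versions of large models can run on consumer cards.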


  • in reply to: Local Install vs GPU / Render Farms (online GPU) #15820
    Andrew
    Keymaster

      In my day job, I use cloud services for everything. There’s no hardware maintenance and no noise when running a heavy job. I am fortunate to be allowed to keep the VM running 24/7. I wouldn’t enjoy it as much if I had to shut it down every day.

      Some online services, like Think Diffusion, auto-shut down after a set amount of time. This is not a bad way to control costs.

  • in reply to: 24gb VRAM vs Architecture series #15819
    Andrew
    Keymaster

      If you are on the fence, using an online service is not a bad idea. It is probably cheaper than owning a card if you are a casual user.

  • in reply to: Feedback on level 2 of the course #15811
    Andrew
    Keymaster

      Thanks for the suggestions. I will make some changes.
