Getting Colab Automatic1111 Working for the first time

    • #15705
      Zandebar
      Participant

        Hello

        Prerequisite:

        Not new to computers, but completely new to Stable Diffusion interfaces (WebUIs) and to Colab; I hadn’t seen Colab until this:

        https://stable-diffusion-art.com/automatic1111-colab/#Step-by-step_instructions_to_run_the_Colab_notebook

        OK, something new, bright and shiny! Oh, I am such a magpie…

        I have got Automatic1111 working in Windows and Linux (Pop!_OS, Rocky) and have decided to have a look at running Automatic1111 in the cloud.
        I’m seeing whether it’s worth buying a new GPU for a local install versus the cloud, and the only way to work that out is to get your feet wet.
        As VRAM is all-important when it comes to Stable Diffusion, I’m thinking it’s probably not worth buying the hardware, as this is going to change:
        24GB is the norm now, and with the RTX 5090 around the corner with (reportedly) 32GB, this is going to keep changing as new versions come out each cycle.
        In my view I’m going to be playing catch-up each iteration, and as a hobbyist and non-gamer I cannot afford to buy the latest and greatest GPU as they come out.
        So am I looking at the cloud? Looking at the GPUs chewing the cud in the render farms, with all that lovely VRAM to be had. Oooh! I can smell it sizzling.
        The cloud is clearly looking like the better option for me; I’ve done the math and it comes out ahead, but I’m not sure.

        I’m in the UK and it’s pounds (£) for US dollars ($) when it comes to hardware, so I will use US dollars for this example.

        Today on Amazon –
        ASUS TUF GeForce RTX 4090 24GB OG Edition Gaming Graphics Card (NVIDIA GeForce RTX 4090, DLSS 3, PCIe 4.0, 24GB GDDR6X, 2x HDMI 2.1a, 3x DisplayPort 1.4a, TUF-RTX4090-24G-OG-GAMING)
        $2,275.00

        I picked this card as it’s mid-range price-wise.

        So, an average 4090 GPU price of $2,275,

        spread over a hardware cycle, gives you a monthly budget to spend on a render farm:

        24 Months $94.79

        36 Months $63.19

        48 Months $47.39

        60 Months $37.91
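
        For reference, a minimal sketch of that amortisation maths (the price and cycle lengths are the ones above; rounding can differ by a cent from the figures quoted):

        gpu_price = 2275.00  # USD, the ASUS TUF RTX 4090 example above
        for months in (24, 36, 48, 60):
            # straight-line amortisation: the purchase price spread evenly over the cycle
            print(f"{months} months: ${gpu_price / months:.2f}/month")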

        My last GPU purchase was in 2019, when I bought a mid-range Nvidia RTX 2070 Super 8GB. I’m two generations behind now, but I’m still able to learn Stable Diffusion on a local install, which has been great.
        I know I’ll soon outgrow it. The card I can afford to purchase now is an RTX 4080 Super with 16GB of VRAM; I know this isn’t enough and I’ll most likely outgrow it quickly.
        I’ll still be able to do some stuff locally. At this stage I don’t know what the limitations will be with mid-range hardware; that’s why the cloud option looks attractive, as I don’t have to worry about hardware and VRAM.
        As you can clearly see, I fit the 60-month cycle given my recent history of need, and with, let’s say, $40 a month to spend I should be OK with this type of flexibility.
        I’m just wondering whether I should stick with the RTX 2070 Super for now or get the RTX 4080 Super for the faster local speeds and double the VRAM; I’m still trying to work out the best option.

        I’m thinking about using the 16GB of VRAM until I hit a wall, then page out to the cloud when I need more. But Stable Diffusion images are not like taking your photos to the printer, as they generate differently each time.
        So I can’t do the donkey work on a low-powered card and reproduce on a high-end card later, as the image will be different. As you can tell, I’m still working this angle out too.

        Right! So now you know my headspace and where I’m at; on to the issue at hand…

         

        OK, on the webpage:

        https://stable-diffusion-art.com/automatic1111-colab/#Step-by-step_instructions_to_run_the_Colab_notebook

        I clicked on the BIG green button and went with the defaults all the way to see what would happen. This returned the error below:


        MessageError                              Traceback (most recent call last)
        <ipython-input-6-63ea7f3d08bc> in <cell line: 416>()
            414 # connect to google drive
            415 from google.colab import drive
        --> 416 drive.mount('/content/drive')
            417 output_path = '/content/drive/MyDrive/' + output_path
            418 root = '/content/'

        3 frames
        /usr/local/lib/python3.10/dist-packages/google/colab/_message.py in read_reply_from_input(message_id, timeout_sec)
            101     ):
            102       if 'error' in reply:
        --> 103         raise MessageError(reply['error'])
            104       return reply.get('data', None)
            105

        MessageError: Error: credential propagation was unsuccessful

         

        I know I didn’t break anything; I just didn’t do something. I followed the instructions, but being a newbie to this, they’re not STUPID-friendly.
        I didn’t know what I was meant to be doing or what parameters I needed to use; I understood some of it and got lost on the rest.
        What didn’t I do? It mentions Google Drive, and I connected to that. Then, ooh, a Linux path, I understand that… Oh great, I’m on a Linux server. Nice!

        Andrew, if you’re there: as I’ve mentioned in my emails to you, you’re in the know and I’m not, so I’m the best person you can use to help make these instructions helpful for students.
        I’m happy to help in any way I can, but as always it depends on so many variables, and I take it it’s like Docker, with the environments being the notebooks.

        Any help would be great; back to the local install for the next SD course…

        All the best
        Michael

      • #15715
        Andrew
        Keymaster

          It seems that the notebook couldn’t connect to your Google Drive. Do you have a Google account? If so, it should ask you to grant permission to access your Google Drive. This is necessary for saving images and accessing models.
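
          For reference, a minimal sketch of the Drive-mount step that failed (force_remount is an optional flag of the drive.mount helper in google.colab; re-running the cell repeats the permission prompt so the Drive scopes can be granted again):

          from google.colab import drive
          # Re-running the mount with force_remount=True re-triggers the OAuth prompt.
          # "credential propagation was unsuccessful" typically means the requested
          # Drive permissions were not granted at that prompt.
          drive.mount('/content/drive', force_remount=True)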

          I can improve the tutorial if you have time to sort this out.

          I have a 4090 and use the notebooks from time to time. Here’s my experience.

          • For casual users, a cloud option is more cost-effective.
          • Because of the interactive nature of the WebUIs, you can pay for a lot of idle time when using the cloud. This may push you to shut machines down, interrupting the enjoyment of creating images.
          • A PC with a high-power GPU may need unexpected maintenance; the power supply is prone to failure, for example. You don’t need to worry about hardware when using the cloud.
          • There’s a startup time for the cloud, whereas a local setup can keep the SD server running 24/7.

          For me, I stick with a local setup because it allows me to use SD intermittently without waiting for it to start.

        • #15800
          Zandebar
          Participant

            Yes, I can help you sort this out.


            • #15871
              Zandebar
              Participant

                Hi Andrew

                 

                Hope you are keeping well.

                 

                I’ve got Colab working now with A1111; sorry for the delay in getting back to you.

                You said: “It seems that the notebook couldn’t connect to your Google Drive. Do you have a Google account? If so, it should ask you to grant permission to access your Google Drive. This is necessary for saving images and accessing models.”

                 

                Yes, I have a Google account.

                 

                I too saw from the error message that the notebook couldn’t connect to my Google Drive space, and you’re exactly right that it came down to permissions. I was rushing, as I didn’t have a lot of time, and was quickly trying to set it up and have a look. I didn’t read the script line for line, so I wasn’t going to hit the “Select all” button without knowing what it was doing in the background (I just selected a few). As you know, in computing you grant the least amount of permission needed to get the job done; the Linux approach, not the MS one. ;-)

                 

                After reading the script I was more confident selecting the permissions needed to get the required access to the drive; I left two out.

                 

                I suggest you put an image in the instructions showing the minimum permissions needed to connect to Google Drive, so that A1111 can function in Colab with a Google Drive connection.

                 

                Also:

                I’ve done courses 1-3 (part of 3) in A1111 on a local install; now that I’m trying Colab, I’m working out what add-ons I need to complete the rest of the course.

                 

                I suggest you list, at the start of each course, what’s needed to complete it. As in: for course X you need these add-ons installed to be able to complete this course. List what’s needed, and then students can check these off on the Colab install to ensure it’s all correct.

                 

                There are places in the instructions where you gloss over things without going into the reasoning behind them, such as ngrok. I’m coming to this for the first time and don’t know the terminology or the reasons why it would be better. A little more explanation in that area would improve the experience.

                 

                Is the notebook a one-time thing, or does the setup reside on Google Drive for later use? I can see the name of the notebook there, so I thought that if I clicked on it, it might load the last setup I entered. I’ve just clicked on it and it’s only a link to Colab, but being able to click on a pre-made setup stored in your Google Drive would, in my opinion, be useful. Just a thought…

                 

                This is a little hazy, but you’ve done a great job with “Let’s Get You Started”. Perhaps consider adding a “what you need to know about Colab” course, enabling the user to become a power user of Colab when it comes to generative AI on the platform.

                 

                After a quick look and a play on Colab, I think I will get more mileage out of an online GPU than a local one, but that’s my bias at this stage and it may change over time. It’s important to learn the basics and build that foundation first before jumping to any conclusions. It’s always fun discussing what’s out there and looking at other options.

                 

                All the best

                 

                Zandebar

                 

              • #15881
                Andrew
                Keymaster

                  Thanks for the suggestions.

                  When I wrote the course, I tried to keep it agnostic about how you run A1111 (local/Colab/online service), so I didn’t write much that is specific to the Colab notebook. I will add instructions to the lessons that require additional extensions.

                • #15889
                  Zandebar
                  Participant

                    Hello

                    I’m just trying to work out my parameters on Colab, and I thought: oooh, 40GB of VRAM, let’s ramp it up and see what it can do. OK, I broke something. Any chance you can explain how far you can take these settings? I thought an upscale shouldn’t tax the GPU too much.

                     

                    I set the notebook running with:

                    Colab A100

                    System RAM
                    1.6 / 83.5 GB

                    GPU RAM
                    0.0 / 40.0 GB

                    Disk
                    53.8 / 235.7 GB

                     

                    From: Stable Diffusion – Level 3 > End-to-end workflow: ControlNet > Generate txt2img with ControlNet

                     

                    Model: dreamshaper_8

                     

                    Prompt:
                    AS-YoungV2, futuristic, Full shot of a young woman, portrait of beautiful woman, solo, pliant, Side Part, Mint Green hair, wearing a red Quantum Dot Reindeer Antler Headpiece outfit, wires, christmas onyx, green neon lights, cyberpunkai, in a Hydroponic mistletoe gardens in futuristic homes with a Robotically animated Christmas displays in public spaces. ambience, shot with a Mamiya Leaf, Fujifilm XF 16mm f/2.8 R WR lens, ISO 6400, f/1.2, Fujifilm Superia X-Tra 400, , (high detailed skin:1.2), 8k uhd, dsir, soft lighting, high quality, film grain,

                     

                    Negative Prompt:
                    BadDream, disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w, 2d, 3d, illustration, sketch, nfsw, nude

                     

                    Sampling method: DPM++ 2M Karras
                    Steps: 30
                    Refiner: Not used
                    Hires fix: Used on some
                    CFG scale: 7
                    Seed: -1
                    Size: 512 x 768

                     

                    This works at batch size 4 with these settings. Good; so far it’s working.

                    Then I hit the Hires. fix:

                    Defaults: Upscaler: Latent > Denoising strength: 0.7

                    This worked at 2x, from 512×768 to 1024×1536 (batch size 1).

                    Then I got an error at 4x, from 512×768 to 2048×3072 (batch size 1):

                     

                    OutOfMemoryError: CUDA out of memory. Tried to allocate 36.00 GiB. GPU 0 has a total capacity of 39.56 GiB of which 35.46 GiB is free. Process 73993 has 4.10 GiB memory in use. Of the allocated memory 3.46 GiB is allocated by PyTorch, and 127.14 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

                    Time taken: 50.9 sec.
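
                    For context, a quick sketch of why 4x is so much heavier than 2x: the pixel count grows with the square of the upscale factor, and activation memory grows with it.

                    base_w, base_h = 512, 768
                    for factor in (2, 4):
                        w, h = base_w * factor, base_h * factor
                        # pixels scale with factor**2: 2x means 4x the pixels, 4x means 16x
                        print(f"{factor}x: {w}x{h} = {w * h / 1e6:.1f} MP ({factor ** 2}x the base pixels)")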

                     

                    I can see what’s going on here from the error message: the render ran out of memory. Now I need to understand the boundaries in Colab and my expectations of this system. I’m just concerned about combining certain features; I need to understand the limits and the process of items coming in and out of VRAM.

                     

                    Here I think I should go: OK, it fell over at 4x, so do it at 2x and then take it to the Extras tab and upscale from there. I get the logic, but it’s more about my expectations of the system and how not to break it with requests for too much VRAM.
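
                    One mitigation comes straight from the error message above; a sketch, assuming the notebook gives you a cell that runs before A1111 starts (the variable must be set before PyTorch initialises CUDA):

                    import os
                    # Suggested by the OutOfMemoryError text itself: expandable allocator
                    # segments can reduce fragmentation-related OOMs.
                    os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'expandable_segments:True'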

                  • #15891
                    Zandebar
                    Participant

                      Please see the post above; I couldn’t edit it to add this.

                       

                      OK!

                       

                      I’ve just repeated what I did in Colab on my local machine, which has an RTX 2070 Super 8GB VRAM card installed. It completed the task and produced an image; it rendered incorrectly, but it nevertheless completed the task.

                      My RTX 2070 Super 8GB shouldn’t be able to outperform a 40GB VRAM A100; the Colab run used out-of-the-box settings. This leads me to think it’s a config issue rather than a card issue. I’m a little lost for words here, so I need to know what’s going on.

                       

                      Settings used on the RTX 2070 Super 8GB to complete the image:

                      AS-YoungV2, futuristic, Full shot of a young woman, portrait of beautiful woman, solo, pliant, Side Part, Mint Green hair, wearing a red Quantum Dot Reindeer Antler Headpiece outfit, wires, christmas onyx, green neon lights, cyberpunkai, in a Hydroponic mistletoe gardens in futuristic homes with a Robotically animated Christmas displays in public spaces. ambience, shot with a Mamiya Leaf, Fujifilm XF 16mm f/2.8 R WR lens, ISO 6400, f/1.2, Fujifilm Superia X-Tra 400, , (high detailed skin:1.2), 8k uhd, dsir, soft lighting, high quality, film grain,
                      Negative prompt: BadDream, disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w, 2d, 3d, illustration, sketch, nfsw, nude
                      Steps: 30, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 7, Seed: 4093592187, Size: 512×768, Model hash: 879db523c3, Model: dreamshaper_8, Denoising strength: 0.7, Hires upscale: 4, Hires upscaler: Latent, Version: v1.10.1
                      Time taken: 8 min. 45.3 sec.

                    • #15894
                      Andrew
                      Keymaster

                        The Hires fix function often results in memory issues; I am not sure what’s wrong with the implementation. Maybe a Flux workflow with increasing batch size or image size is a better way to test.
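
                        One plausible config difference, as a sketch: a local install often passes memory-saving launch flags that a notebook’s defaults may not. COMMANDLINE_ARGS, --xformers, and --medvram are real A1111 options; whether either setup actually uses them is an assumption.

                        import os
                        # A1111's launcher reads extra flags from the COMMANDLINE_ARGS
                        # environment variable: --xformers enables memory-efficient
                        # attention, --medvram swaps model components between RAM and
                        # VRAM. Either can be the difference between an OOM and a
                        # completed Hires fix render.
                        os.environ['COMMANDLINE_ARGS'] = '--xformers --medvram'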

                      • #15905
                        Zandebar
                        Participant

                          Am I right in saying that in one of the A1111 courses you said it’s better to upscale 2x with Hires fix and then take it to the Extras tab?

                          I was just surprised that my RTX 2070 handled 4x and Colab didn’t; surely it’s a config issue…

                        • #15910
                          Andrew
                          Keymaster

                            Yes, this could be.

                          • #16323
                            Zandebar
                            Participant

                              Hello

                              Does anyone know how exactly Colab billing works? I’ve read that any remaining compute units roll over and stay valid for 3 months. Then I’ve read that I will receive another 100 compute units at the next billing; this part gets a little confusing, as it appears that any remaining units are not taken into account. I’ve only used 18.49 compute units this month, leaving 81.51 compute units. As per their information, at the next billing my compute units will go to 100, rather than adding 100 compute units to my remaining units for a total of 181.51 units.

                              I’ve now cancelled Colab so I don’t lose any compute units, and my remaining units are still in the account. Can anyone expand on this? It looks like Google Colab takes whatever compute units remain, no matter how many are left, at the end of each billing cycle, just refreshing the total to 100 compute units at the start of the next one. I’m sure I’ve got this wrong, as Google wouldn’t do that, would they? Those remaining units are bought and paid for.

                              • #16326
                                Andrew
                                Keymaster

                                  I have a Colab Pro subscription. They add 100 units to my account at billing, up to 300 units. At times I have cancelled and resubscribed, and the compute units remained in my account.
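
                                  A sketch of that rollover rule as described above (the 100-unit grant and the 300-unit cap come from this post; the function name is just for illustration):

                                  def units_after_billing(remaining: float, grant: float = 100.0, cap: float = 300.0) -> float:
                                      # new balance: the monthly grant added to what's left, capped at 300
                                      return min(remaining + grant, cap)

                                  print(units_after_billing(81.51))  # 181.51, matching the balance quoted below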

                              • #16327
                                Zandebar
                                Participant

                                  A Colab Pro subscription is what I did have; I just wasn’t sure what would happen at billing. I’ll resubscribe and see; I’ll do that now…

                                   

                                  You are subscribed to Colab Pro.
                                  Available: 181.51 compute units
                                  Usage rate: approximately 0 per hour
                                  You have 0 active sessions.

                                   

                                  That seems to have done it. I’ll have to see what happens when it rolls over next month, whether it just adds the units, presuming I have any left. Now that I’ve learnt what things do in ComfyUI, I’ll be experimenting more and getting to know SD for when I need more than 8GB of VRAM.

                                   

                                  The confusing bit was that I asked Gemini what would happen, and it advised that the compute units wouldn’t be added. I’m good for now.

                                   

                                  Thank you for your response.
