Zandebar

Forum Replies Created

in reply to: Include resources needed in course intro #15984

      Hello All

Andrew: I must second David’s post, as I was having problems moving from a local install to Colab and needed to know which assets were required for a Colab session.

       

I suggest you add a quick look-up of what’s needed to complete courses 1-4 at the start of each course. As in: for course X you need these add-ons installed to complete the course. List what’s needed, then the student can check these off on the Colab install to make sure it’s all correct.

      https://stable-diffusion-art.com/forums/topic/getting-colab-automatic1111-working-for-the-first-time/

I suggested this at the start of each course, but maybe a more granular, per-section list would work. I also have trouble knowing which models you are using for your examples. You mostly say which ones, but there have been a good few times where you haven’t, and I just used one from my list to get past that point. I think David is quite right in suggesting what he did.

Something like: assets needed for the course, and then, per section, the assets needed for that section.

Being new to Colab and its UI, that would be a great benefit for me until I get used to using the web app on an online server.

       

      Your response to my suggestion was:

      Thanks for the suggestions.

      When I wrote the course, I tried to keep it agnostic to how to use A1111 (local/colab/online service). So I didn’t write much about instructions specific to the Colab notebook. I will add them to lessons that require additional extensions.

       

I see where you are coming from, but that doesn’t help when you do a course over multiple days and need to get set up each time. The local install is less important because you set things up along the way, so on that front I’ve not had an issue. The issue is Colab, where you have to set up the installation each time you start the service. So doing a course over multiple days, or having to restart Colab (as David pointed out) to add assets you didn’t think you needed at the start, doesn’t help the student.

Am I right in saying that the amount of assets loaded into your Colab session affects your compute credits? If so, needlessly having to restart a session to add assets has a knock-on effect on your compute credits.

I’ve had Colab for 2-3 weeks now and I’ve avoided using it in favour of the local install, because I don’t know which assets to add to the Colab session for the course. The local install just picks up where you last left off in that section of the course, but Colab doesn’t do that. Out of my 100 compute credits I’ve only used 15, and that was just playing around with the Colab setup.

       

In my view David is quite CORRECT in pointing this out.

      in reply to: Getting Colab Automatic1111 Working for the first time #15905

Am I right in saying that in one of the A1111 courses you said it’s better to upscale x2 in Hires fix and then take it to the Extras tab?

I was just surprised that my RTX 2070 handled 4x and Colab didn’t; surely it’s a config issue…

        in reply to: Local Install vs GPU / Render Farms (online GPU) #15904

I’ve decided that I’m going to hold off upgrading the GPU until Nvidia launches the RTX 50 series and see how they perform. Given the stats it should be impressive, but only time will tell, especially with the production issues they’ve had. I’m just worried that it may turn into an Intel-type hardware issue; 12 months after release should be enough time for that to shake out, but then you’re 12 months behind. I was hoping to pick up a 4090 within my budget after the launch, but that’s looking unlikely as they’ve reduced the available stock, which is keeping the price the same. I’ve been looking in the Black Friday sales and prices remain unchanged, which is a bit of a surprise given that the launch of the 50 series is well known; they’ve handled that well to maintain the price.

           

It’s looking most likely that I’ll end up with an RTX ??80-series card of some description with 16GB, either 40 or maybe 50 series. I may spring the extra cash and get the 5080 when it comes out; once the cards have been reviewed I’ll work out which way I’m going to jump.

That’s why I’m really interested in the limitations and in when I’ll need to use a GPU server farm; that may be a better way to go in the long run. Reading around, the FLUX models can completely fill the VRAM on consumer cards, so I’m considering whether it’s GPU server farms only. I do need a performance lift over my present GPU; it’s a matter of working out the pros and cons.

           

I still don’t know where I’m heading with SD; I have to work that out first.

           

           

          in reply to: 30 Days Using Stable Diffusion Art with Scholar Monthly #15902

            No Problem!!

You’re doing something great with this website and I can see that you have put a lot of time and effort into making this educational site what it is. It’s also a great time saver for me, as it’s all laid out nicely and easy to understand, so no effort is needed on my side. That’s why it’s worth the fee, and you deserve to be financially compensated for your efforts and for the tools you use to get everything up and running.

             

            Combine that with a day job and you have true PASSION! 😉

            in reply to: Local Install vs GPU / Render Farms (online GPU) #15892

              That does add an extra layer of complication to what I’m trying to work out.

It looks like I’m going to need some time to get my head around the GPU issue and what you can or can’t do at a given VRAM size.

I just need to work out, say at 16GB, when I would need to use an online GPU service to render a certain model. Surely I should be able to look at a model and say, OK, that model won’t work on this local GPU, so if I want to use it I’ll need an online GPU service. From your explanation it’s not so straightforward, and it’s just a matter of when the GPU will crash and give you an error. Surely that’s not the case, is it? You should be able to apply some logic somewhere.

              in reply to: Getting Colab Automatic1111 Working for the first time #15891

Please see the above post; I couldn’t edit it to add this.

                 

                OK!

                 

I’ve just repeated what I did in Colab on my local machine, which has an RTX 2070 Super 8GB VRAM card installed. It completed the task and produced an image; it rendered incorrectly, but nevertheless it completed the task.

My RTX 2070 Super 8GB shouldn’t be able to outperform a 40GB VRAM A100; the Colab used out-of-the-box settings. That leads me to think this is a config issue rather than a card issue. I’m a little bit lost for words here, so I need to know what’s going on.

                 

Settings used on the RTX 2070 Super 8GB to complete the image:

                AS-YoungV2, futuristic, Full shot of a young woman, portrait of beautiful woman, solo, pliant, Side Part, Mint Green hair, wearing a red Quantum Dot Reindeer Antler Headpiece outfit, wires, christmas onyx, green neon lights, cyberpunkai, in a Hydroponic mistletoe gardens in futuristic homes with a Robotically animated Christmas displays in public spaces. ambience, shot with a Mamiya Leaf, Fujifilm XF 16mm f/2.8 R WR lens, ISO 6400, f/1.2, Fujifilm Superia X-Tra 400, , (high detailed skin:1.2), 8k uhd, dsir, soft lighting, high quality, film grain,
                Negative prompt: BadDream, disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w, 2d, 3d, illustration, sketch, nfsw, nude
                Steps: 30, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 7, Seed: 4093592187, Size: 512×768, Model hash: 879db523c3, Model: dreamshaper_8, Denoising strength: 0.7, Hires upscale: 4, Hires upscaler: Latent, Version: v1.10.1
                Time taken: 8 min. 45.3 sec.

                in reply to: Getting Colab Automatic1111 Working for the first time #15889

                  Hello

I’m just trying to work out my parameters on Colab and I thought, OOOH, 40GB of VRAM, let’s ramp it up and see what it can do. OK, I broke something. Any chance you can explain how far you can take these settings? I thought an upscale shouldn’t tax the GPU too much.

                   

                  I set the notebook running with:

                  Colab A100

                  System RAM
                  1.6 / 83.5 GB

                  GPU RAM
                  0.0 / 40.0 GB

                  Disk
                  53.8 / 235.7 GB

                   

                  From: Stable Diffusion – Level 3 > End-to-end workflow: ControlNet > Generate txt2img with ControlNet

                   

                  Model: dreamshaper_8

                   

                  Prompt:
                  AS-YoungV2, futuristic, Full shot of a young woman, portrait of beautiful woman, solo, pliant, Side Part, Mint Green hair, wearing a red Quantum Dot Reindeer Antler Headpiece outfit, wires, christmas onyx, green neon lights, cyberpunkai, in a Hydroponic mistletoe gardens in futuristic homes with a Robotically animated Christmas displays in public spaces. ambience, shot with a Mamiya Leaf, Fujifilm XF 16mm f/2.8 R WR lens, ISO 6400, f/1.2, Fujifilm Superia X-Tra 400, , (high detailed skin:1.2), 8k uhd, dsir, soft lighting, high quality, film grain,

                   

                  Negative Prompt:
                  BadDream, disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w, 2d, 3d, illustration, sketch, nfsw, nude

                   

                  Sampling method: DPM++ 2M Karras
                  Steps: 30
                  Refiner: Not used
                  Hires fix: Did on some
                  CFG scale: 7
                  Seed: -1
                  Size: 512 x 768

                   

This works set to batch size 4 at these settings; good, so far it’s working.

                  Then I hit the Hires. fix:

                  Defaults: Upscaler- Latent > Denoising strength – 0.7

                   

                  This worked @ x2 from 512×768 to 1024 x 1536 – Batch size 1

                   

                  Then got an error @ x4  from 512×768 to 2048 x 3072 – Batch size 1

                   

                  OutOfMemoryError: CUDA out of memory. Tried to allocate 36.00 GiB. GPU 0 has a total capacity of 39.56 GiB of which 35.46 GiB is free. Process 73993 has 4.10 GiB memory in use. Of the allocated memory 3.46 GiB is allocated by PyTorch, and 127.14 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

                  Time taken: 50.9 sec.

                   

I can see what’s going on here from the error message: the render ran out of memory. Now I need to understand the boundaries in Colab and set my expectations of the system. I’m just concerned about combining certain features; I need to understand the limits and the process of items coming in and out of VRAM.
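As an aside, the error message itself points at one allocator setting. A minimal sketch of how I think it could be set before A1111 starts (assuming the Colab notebook lets you set environment variables in a cell before launch; I haven’t verified that it avoids this particular OOM):

import os

# Suggested by the CUDA OOM message: let the PyTorch allocator use
# expandable segments to reduce fragmentation. Must be set before
# PyTorch initialises CUDA, i.e. before launching A1111 in the notebook.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"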

                   

Here I think I should go: OK, it fell over at x4, so do it at x2 and then take it to the Extras tab and upscale from there. I get the logic, but it’s more about my expectations of the system and how not to break it with oversized VRAM requests.
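For what it’s worth, a rough back-of-envelope (my own sketch, not from the course) shows why x4 is so much heavier than x2: the pixel count grows with the square of the upscale factor, and the latent upscaler has to hold the full-size tensors in VRAM.

# Rough sketch: output pixel counts for Hires fix upscale factors.
# The latent upscaler works on the full-size image in VRAM, so memory
# grows at least with the pixel count (attention can grow even faster).
base_w, base_h = 512, 768

for factor in (1, 2, 4):
    w, h = base_w * factor, base_h * factor
    print(f"x{factor}: {w}x{h} = {w * h / 1e6:.1f} MP "
          f"({factor ** 2}x the base pixel count)")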

                  in reply to: 24gb VRAM vs Architecture series #15873

It’s kind of where I’m at, with the move to cloud computing and Nvidia moving away from desktop products due to miniaturisation (mini PCs, tablets, laptops). I feel that I’m going to get screwed: whatever GPU I can afford on my budget will always be entry level, and I’ll always be playing catch-up. I don’t know yet whether the 90% of rubbish I’ll produce will matter on a cloud GPU; that’s what I’m trying to work out. Apart from generative AI, I have no barrier with my current old GPU (RTX 2070). I know I can’t afford a GPU with 24GB of VRAM on my present budget (prices might drop) to get the benefit of all the modules and workflows, and VRAM requirements are only going to go up as models and workflows get bigger, so it’s better to work smart and know your limitations. It’s about seeking out the better financial option for the creative output we’re craving, plus I’m looking to go down the rabbit hole of video.

                     

I’m not sure where this journey is going to take me, and there are going to be surprises along the way; who knows where I’ll end up.

                     

It’s just about getting experience in generative AI and being able to make smart decisions along the way.

                    in reply to: Getting Colab Automatic1111 Working for the first time #15871

                      Hi Andrew

                       

Hope you are keeping well.

                       

                      I’ve got Colab working now with A1111, sorry for the delay in getting back to you.

                      It seems that the notebook couldn’t connect to your Google Drive. Do you have a Google account? If so, it should ask you to grant permission to access your Google Drive. This is necessary for saving images and accessing models.

                       

Yes, I have a Google account.

                       

I too saw from the error message that the notebook couldn’t connect to my Google Drive space, and you’re exactly right that it came down to permissions. I was rushing as I didn’t have a lot of time and was quickly trying to set it up and have a look. I hadn’t read the script line for line, so I wasn’t going to hit the select-all button without knowing what it was doing in the background (I just selected a few). As you know, in computing you grant the least amount of permission needed to get the job done: the Linux approach, not the MS one ;-).

                       

After reading the script I was more confident selecting the ones needed to get the required access to the drive; I left two out.

                       

I suggest you put an image in the instructions showing the minimum permissions needed to connect to Google Drive so that A1111 can function in Colab with a Google Drive connection.

                       

                      Also:

I’ve done parts of courses 1-3 (part of 3) in A1111 on a local install. Now that I’m trying Colab, I’m working out what add-ons I need to complete the rest of the course.

                       

I suggest you add a quick look-up of what’s needed to complete courses 1-4 at the start of each course. As in: for course X you need these add-ons installed to complete the course. List what’s needed, then the student can check these off on the Colab install to make sure it’s all correct.

                       

There are parts of the instructions where you gloss over things without going into the reasoning behind them, such as ngrok. I’m coming to this for the first time and don’t know the terminology or the reasons why it would be better. A little more explanation in that area would improve the experience.

                       

Is the notebook a one-time thing, or does the setup reside on Google Drive for later use? I can see the name of the notebook there, so I thought that if I clicked on it, it might load the last setup I entered. I’ve just clicked on it and it’s just a link to Colab, but in my opinion it would be useful to be able to click on a pre-made setup stored in your Google Drive. Just a thought…

                       

This is a little hazy, but you’ve done a great job with “Let’s Get You Started”; perhaps consider adding a “what you need to know about Colab” course so users can become power users of Colab when it comes to generative AI on that platform.

                       

After a quick look and a play on Colab, I think I will get more mileage out of an online GPU than a local one, but that’s my bias at this stage and it may change over time. It’s important to learn the basics and build that foundation first before jumping to any conclusions. It’s always fun discussing what’s out there and looking at other options.

                       

                      All the best

                       

                      Zandebar

                       

                      in reply to: Local Install vs GPU / Render Farms (online GPU) #15846

On the GPU hardware side, I’m having trouble working out what you can do at each VRAM tier.

What are the limitations of each? What can’t you do with 12GB, 16GB, 24GB and 48GB (currently)?

How much headroom do you need to leave in VRAM for things other than the model? I’m guessing 2GB; I don’t know if that’s right, but it looks fair (see the sketch after the list below).

So that would potentially mean (I’m guessing):

• 12GB → ~10GB max model size
• 16GB → ~14GB max model size
• 24GB → ~22GB max model size
• 48GB → ~46GB max model size
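To make the guess explicit, here’s a tiny sketch of the rule of thumb I’m assuming (the 2GB headroom figure is my guess, not a measured value):

# Rule of thumb only: usable model size = total VRAM minus a guessed
# headroom for activations, VAE, text encoders and other overhead.
# The 2 GB headroom is an assumption, not a measurement.
HEADROOM_GB = 2

for vram_gb in (12, 16, 24, 48):
    print(f"{vram_gb} GB card -> ~{vram_gb - HEADROOM_GB} GB for the model")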

                         

How large do these models get? And then there’s the workflow to consider: how does that affect the VRAM?

                         

If you have a look at Flux.1, the information I found at

                        https://medium.com/@researchgraph/the-ultimate-flux-1-hands-on-guide-067fc053fedd

                        States:

                        The regular version requires at least 32GB of system RAM. Testing shows that a 4090 GPU can fully occupy its memory. The dev-fp8 version is recommended for local use.

                        * I assume when talking about system ram it means VRAM

                         

So what’s the difference between dev-fp8 and the regular model? (This was covered in the courses.)

                         

Would the GeForce RTX 4090 function using this model (GB sizes from Hugging Face)? We know that the new GeForce RTX 5090 with 32GB (reportedly) will be able to handle it.

                        flux1-dev.safetensors = 23.8GB

                        flux1-schnell.safetensors = 23.8GB – This is the same size as pro
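A rough way I’ve been thinking about the fp8 question (my own back-of-envelope, assuming Flux.1 is roughly a 12-billion-parameter model, which lines up with the ~23.8GB fp16 file size above):

# Back-of-envelope: weight size by precision for a ~12B-parameter model.
# fp16/bf16 stores 2 bytes per parameter, fp8 stores 1 byte, so fp8
# roughly halves the file size (activations and overhead come on top).
params = 12e9
for dtype, bytes_per_param in (("fp16/bf16", 2), ("fp8", 1)):
    size_gib = params * bytes_per_param / 1024**3
    print(f"{dtype}: ~{size_gib:.1f} GiB of weights")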

                         

                        Also Stable Diffusion 3.5 model

This model fits inside the VRAM of the GeForce RTX 4090 but not the GeForce RTX 4080 Super at 16GB.

                        stable-diffusion-3.5-large = 16.5 GB

These are the new models coming out, and if I’m correct, they are starting to be prohibitive for hobbyists and creative enthusiasts who can’t afford high-VRAM flagship GPUs. What I’m trying to get at is: when running these models locally, what can you do with the best card you can afford?


                        in reply to: 24gb VRAM vs Architecture series #15816

                          Oops: I’ve made a mistake and I can’t edit

                          4080 Difference: +9.3% with the 5080

                          in reply to: 24gb VRAM vs Architecture series #15815

                            Hello

                            Great and Thank You! You kind of confirmed what I was thinking.

Right: I have a bottleneck. I’m based in the UK and I only have £1,000 to spend on a GPU; I’m a hobbyist and will not be making any money from this to justify the expense and outlay. However, I’m also not sure where I’ll be going with this, so I’m looking for a hybrid solution to my GPU needs.

Let’s get this straight: in 12 months’ time, when you get the RTX 5090 with 32GB VRAM, you’ll be saying wow at the speed and recommending 32GB VRAM and not 24GB when asked the very same question.

Granted, if you’re a pro then you’ll need the flagship option. When you’re not a pro (like me), justifying the expense becomes hard when you’re on a tight budget and have household bills to pay; I can only dream of owning the latest and greatest GPU. There is a compromise: a cheaper option, or renting a GPU from a render farm. I’m actually looking at both at the moment.

An Nvidia GeForce RTX 4080 Super with 16GB VRAM is what I can afford right now, and I’ll be able to learn SD and do a fair bit with 16GB. When I hit that wall and need extra VRAM, I’ll outsource the GPU to a render farm and just pay for what I use. This isn’t a great place to be with the new RTX 5000 series coming out: where you were only two-thirds of the max VRAM, once the 5000 series arrives you’re at half the max size. Model sizes will only get bigger; I was bouncing around and saw a (Flux) model at 14GB. Ouch! Not much room for everything else that gets stored in VRAM. Chances are that size of model would work in 16GB VRAM, but it’s only going to get bigger. We know that because of the increase in VRAM in the 5000 series: if you make more space, people will fill more space. You can’t win as a hobbyist.

I was also thinking that if I wait long enough, the 3090 may fit in my budget.

                            Do you get this craziness where you are?

                            EVGA GeForce RTX 3090 Ti FTW3 ULTRA GAMING, 24G-P5-4985-KR, 24GB GDDR6X, iCX3, ARGB LED, Backplate, Free eLeash

                            £2,094.22

                            MSI GeForce RTX 4090 VENTUS 3X E 24G OC Gaming Graphics Card – 24GB GDDR6X, 2550 MHz, PCI Express Gen 4, 384-bit, 2x DP v 1.4a, HDMI 2.1a (Supports 4K & 8K HDR)

                            £1,749.99

                            GIGABYTE GeForce RTX 4090 GAMING OC 24GB Graphics Card – 24GB GDDR6X, PCI-E 4.0, Core 2535Mhz, RGB fusion, Anti-sag bracket, Metal back plate, DP 1.4, HDMI 2.1a, NVIDIA DLSS 3, GV-N4090GAMING OC-24GD

                            £1,899.00

                             

Where the 4090 is cheaper than the 3090. CRAZY! OK, the 3090 is not a toaster like the 4090 with its power-socket issue, but still, you would have thought there’d be some respite for us hobbyists with an older series of card. Nah! So we’re stuck at the next generation down, the 4080 Super.

                            And wait for it, Nvidia are not doing themselves any favours with the next generation of cards now that they have no competition. Look at this…

                            RTX 5080
                            TDP: 350W
                            GPU Name: GB203
                            GPCs: 7
                            TPCs: 42
                            SMs: 84
                            Cores: 10752

                            Tensor Cores: (likely) 384 (half the number of RTX 5090)
                            Memory Configuration: 256-bit GDDR7 (16GB VRAM)

                            Boost clock speed around 2.8 GHz

                            RTX 4080
                            Architecture: Ada Lovelace
                            Process node: 4nm TSMC
                            CUDA cores: 9,728
                            Ray tracing cores: 76
                            Tensor cores: 304
                            Base clock speed: 2,205 MHz
                            Maximum clock speed: 2,505 MHz
                            Memory size: 16GB GDDR6X
                            Memory speed: 21 Gbps
                            Bus width: 256-bit
                            Bandwidth: 912 GBps
                            TBP: 320W

                            4080 Difference: +9.3% with the 5090

Nvidia have got their head somewhere I can’t mention here, but logically, with the uplift in performance of the 5090, you would have expected a similar shift in the other models.

I’d expect the GeForce RTX 5000 series to look something like this in VRAM: 12GB (5060), 16GB (5070), 24GB (5080) and 32GB (5090).

And the CUDA core count is not much higher; you would have thought they’d match the 4090’s cores with the 5080. At 10,752 cores, you would have expected them to match it at 16,384 CUDA cores, given that the 5090 is rumoured to have 21,760 CUDA cores. And the Tensor cores have dropped; maybe there’s a good reason for that, but it’s out of my scope.

Logically that makes more sense; as it is, it just leaves us users of the products frustrated. If the 5080 had the imaginary 24GB VRAM and 16,384 CUDA cores, it would almost match the 4090 and cause a price drop on remaining 4090 units. Everyone wins, but no…

That’s why I’m waiting to see what the market does and whether these rumoured specs are true, and I’ll make a decision then. Either way the consumer is going to be at a disadvantage, given Nvidia’s previous history.

In the meantime, checking out GPU farms and what they can offer looks like a good idea and could in principle be more beneficial. That’s out of scope for this thread; I’ll make one on GPU farms…

                             

                            Kind Regards

                            Zandebar


                            in reply to: Getting Colab Automatic1111 Working for the first time #15801

                              Yes I can help you sort this

                              in reply to: Getting Colab Automatic1111 Working for the first time #15800

                                Yes I can help you sort this
