Zandebar

Forum Replies Created

Viewing 15 posts - 1 through 15 (of 23 total)
  • in reply to: Civitai exits the UK #18621
    Zandebar
    Participant

      Hello

      I can confirm a VPN works. I waited until the 25th to test it with my basic VPN provider, as I live in the UK. I think CivitAI will just do the bare minimum to comply, saying, well, we blocked access for UK users to comply with UK law. I don’t know if they’ll go as far as adding VPN detection into the mix; unless basic IP addressing becomes a problem, I think VPNs will be fine for access for now, as people who use the site are more tech savvy 😉. It’s yet another example of governments not understanding technology. Well, at least it will make the people they serve feel :-/ like they’ve addressed the issue… So YEAH!!!! 😮 head firmly back in sand….

      The issue is that other countries may follow. Even though I agree with restricting non-adults from that content, a better implementation needs to be found than a blanket block. This isn’t intended to call out CivitAI; I fully respect their decisions and their actions to comply with each country’s laws. Rather, there should be technology in place so that companies such as CivitAI are able to comply easily.

       

      • This reply was modified 5 days, 22 hours ago by Zandebar.
      in reply to: Support Of Nvidia RTX 50 Series / 5090 – Win11 #18464
      Zandebar
      Participant

        At this time I can’t get the following to work:

        Flux + CogVideoX image-to-video (ComfyUI)

        DownloadAndLoadCogVideoModel
        The deprecation tuple (“output_type==’np’”, ‘0.33.0’, “get_3d_sincos_pos_embed uses torch and supports device. from_numpy is no longer required. Pass `output_type=’pt’ to use the new version now.”) should be removed since diffusers’ version 0.33.1 is >= 0.33.0

        It appears my diffusers version of 0.33.1 is too new for the node, which was developed against 0.33.0 – Fancy that!
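The error fires because the library compares versions: a deprecation shim tagged for removal at diffusers 0.33.0 raises as soon as the installed version reaches that number. A minimal sketch of that check, with the version figures taken from the error above (the helper names are mine, not the actual diffusers internals):

```python
# Sketch of why the node errors: a deprecation shim marked for removal
# at diffusers 0.33.0 trips once the installed version catches up.
# (Helper names are illustrative, not the real diffusers internals.)

def version_tuple(v: str) -> tuple:
    """Turn '0.33.1' into (0, 33, 1) for a numeric comparison."""
    return tuple(int(part) for part in v.split("."))

def shim_should_be_removed(installed: str, removal: str) -> bool:
    """The shim must be deleted once installed >= removal, so it raises."""
    return version_tuple(installed) >= version_tuple(removal)

print(shim_should_be_removed("0.33.1", "0.33.0"))  # True: the node needs updating
print(shim_should_be_removed("0.32.2", "0.33.0"))  # False: the shim still works
```

Until the node is updated, one stop-gap (assuming nothing else in the environment requires 0.33.1) is pinning back with `pip install "diffusers==0.33.0"`.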

         

        Canny ControlNet for Flux (ComfyUI) 

        Just falls over.

        I’ll keep adding these to this thread as I come across them; hopefully this will help others after me as a sanity check.

        As far as I can work out, it’s to do with CUDA 12.8 (cu128) and PyTorch 2.7.1 around sm_120 for the 5090. Any node (or other extension) being used must be updated for this version change, so it’s only a matter of time for other developers to update their apps/nodes, if they’re still maintaining them. We are at the whim of each node’s (or extension’s) developer to supply updates, unless you’re good at coding and can do it yourself.
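One quick sanity check for this class of problem is whether the installed PyTorch wheel was even compiled for the 5090’s sm_120 architecture. On a live machine the two inputs come from the standard PyTorch calls `torch.cuda.get_device_capability()` and `torch.cuda.get_arch_list()`; the small helper below is my own, shown with fixed values for illustration:

```python
# Check whether a PyTorch wheel's compiled arch list covers a GPU's
# compute capability. On a real system, feed in
# torch.cuda.get_device_capability() and torch.cuda.get_arch_list().

def wheel_supports_gpu(capability: tuple, arch_list: list) -> bool:
    """True if e.g. capability (12, 0) matches an 'sm_120' entry."""
    wanted = f"sm_{capability[0]}{capability[1]}"
    return wanted in arch_list

# A cu128 build that includes Blackwell (sm_120) support:
print(wheel_supports_gpu((12, 0), ["sm_80", "sm_90", "sm_120"]))  # True
# An older wheel without sm_120 -- a 5090 would hit kernel errors:
print(wheel_supports_gpu((12, 0), ["sm_80", "sm_90"]))            # False
```

If the wheel lacks sm_120, no amount of node updates will help until PyTorch itself is upgraded to a cu128 build.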

         

        in reply to: ComfyUI Checkpoint Lottery with Image generation. #17508
        Zandebar
        Participant

          Thank you for the tip; I did just that and it improved the output.

          I finally found out what was causing the issue: I’m using 2 LoRAs in the workflow, and bypassing these LoRAs improved the generated images to being set outside, to the point where only 2% don’t follow the prompt.

          1 issue fixed,  3 more created, LOL! 😉

           

          in reply to: Colab notebooks not loading #16333
          Zandebar
          Participant

            Yeah! I have just signed up with Ngrok and verified.

            I’m getting this error, which I’ve reported in the topic I started, as it’s out of scope for this thread:

            CheckpointLoaderSimple
            Error while deserializing header: HeaderTooSmall

            in reply to: Colab notebooks not loading #16331
            Zandebar
            Participant

              I’m really having a hard time getting ComfyUI to load; I’ve now succeeded once after five reloads. Can you offer any help here, please?

              in reply to: Colab notebooks not loading #16330
              Zandebar
              Participant

                This now appears to be working. I shut down the notebook and reloaded it, and it came up as it should, so just a glitch, I assume. It loaded in just a couple of minutes with the spinning circle, so I’m sorted for now.

                Generally, how long should you wait for it to load before giving up?

                in reply to: Colab notebooks not loading #16329
                Zandebar
                Participant

                  Getting Colab ComfyUI working for the first time isn’t working out for me. I’ve loaded the script in the Colab environment, entered the tunnel password, and clicked on the link, and it has just been spinning in my browser for 30 minutes while using compute time in Colab.

                  I’ve loaded the defaults…

                  in reply to: Getting Colab Automatic1111 Working for the first time #16327
                  Zandebar
                  Participant

                    A Colab Pro subscription is what I did have; I wasn’t sure what would happen at billing, so we’ll have to see. I’ll resubscribe and find out; I’ll do that now…..

                     

                    You are subscribed to Colab Pro. Learn more
                    Available: 181.51 compute units
                    Usage rate: approximately 0 per hour
                    You have 0 active sessions.

                     

                    That seems to have done it. We’ll have to see what happens when it rolls over next month, whether it just adds the units, presuming I have any left. Now that I’ve learnt what things do in ComfyUI, I’ll be experimenting more and getting to know SD for when I need more than 8GB of VRAM.

                     

                    The confusing bit was that I asked Gemini what would happen, and it advised that the compute units wouldn’t be added. I’m good for now.

                     

                    Thank you for your response.

                    in reply to: Getting Colab Automatic1111 Working for the first time #16323
                    Zandebar
                    Participant

                      Hello

                      Does anyone know exactly how Colab billing works? I’ve read that any remaining compute units will roll over and stay valid for 3 months, but I’ve also read that I will receive another 100 compute units at the next billing, and this part gets a little confusing, as it appears that any remaining units are not taken into account. I’ve only used 18.49 compute units this month, leaving 81.51 compute units. As per their information, at the next billing my compute units will go to 100, rather than adding 100 compute units to my remaining units for a total of 181.51 units.

                      I’ve now cancelled Colab so I don’t lose any compute units, and my remaining units are still in the account; can anyone expand on this? It looks like Google Colab takes away any remaining unused compute units, no matter how many are left, at the end of each billing cycle, just refreshing the total to 100 compute units at the start of the next cycle. I’m sure I’ve got this wrong, as Google wouldn’t do that, would they? Those remaining units are bought and paid for.
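The two readings of the billing rules give very different totals. A quick sketch of the arithmetic, using the figures from this thread (the function is mine, just to make the two scenarios concrete):

```python
# Two possible Colab billing behaviours, using the figures from this
# thread: 81.51 units remaining, 100 new units granted at renewal.

def units_after_renewal(remaining: float, monthly: float, rollover: bool) -> float:
    """If unused units roll over, they stack with the new monthly grant;
    otherwise the balance simply resets to the monthly grant."""
    return remaining + monthly if rollover else monthly

print(units_after_renewal(81.51, 100.0, rollover=True))   # units stack: 181.51
print(units_after_renewal(81.51, 100.0, rollover=False))  # balance resets: 100
```

A later balance screen showing 181.51 available units would indicate the rollover behaviour, not the reset.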

                      in reply to: Include resources needed in course intro #15984
                      Zandebar
                      Participant

                        Hello All

                        Andrew: I must second David’s post, as I was having problems moving from a local install to Colab, where I needed to know which assets were needed for a Colab session.

                         

                        I suggest you add a quick list of what’s needed to complete each of courses 1-4 at the start of each course. As in: for course X you need these add-ons installed to be able to complete this course. List what’s needed; then you can check these off on the Colab install to ensure it’s all correct.

                        https://stable-diffusion-art.com/forums/topic/getting-colab-automatic1111-working-for-the-first-time/

                        I suggested this at the start of each course; maybe it could be even more granular, per section of the course. I also have trouble knowing which models you are using for your examples. You mostly say which ones, but there have been a good few times when you haven’t, and I just used one from my list to get past that point. I think David is quite right in suggesting what he did.

                        Something like: assets needed for the course, and then, per section, the assets needed for that section.

                        Being new to Colab and its UI, that would be a great benefit for me, until I get used to using the web app on an online server.

                         

                        Your response to my suggestion was:

                        Thanks for the suggestions.

                        When I wrote the course, I tried to keep it agnostic to how to use A1111 (local/colab/online service). So I didn’t write much about instructions specific to the Colab notebook. I will add them to lessons that require additional extensions.

                         

                        I see where you are coming from, but that doesn’t help when you do a course over multiple days and need to get set up each time for that course. The local install is less important, as you set things up along the way, so on that front I’ve not had an issue. The issue raised is Colab, where you have to set up the installation each time you start the service. So multiple days on a course, or having to restart Colab (as David pointed out) to add assets you didn’t think you needed at the start, doesn’t help the student.

                        Am I right in saying that the number of assets loaded into your Colab session affects your compute credits? So needlessly having to restart a session to add assets has a knock-on effect on your compute credits?

                        I’ve had Colab for 2-3 weeks now, and I’ve avoided using Colab versus the local install because I don’t know which assets to add to the Colab session for the course. The local install just picks up where you last left off in that section of the course, but Colab doesn’t do that. Out of my 100 compute credits I’ve only used 15, and that was just playing around with the Colab setup.

                         

                        In my view David is quite CORRECT  in pointing this out.

                        in reply to: Getting Colab Automatic1111 Working for the first time #15905
                        Zandebar
                        Participant

                          Am I right in saying that in one of the A1111 courses you said it’s better to upscale 2x in Hires fix, then take it to the Extras tab?

                          I was just surprised that my RTX 2070 handled 4x and Colab didn’t; surely it’s a config issue….

                          in reply to: Local Install vs GPU / Render Farms (online GPU) #15904
                          Zandebar
                          Participant

                            I’ve decided that I’m going to hold off upgrading the GPU until Nvidia launch the RTX 50 series, and see how they perform. Given the specs it should be impressive; only time will tell, given also the tech issues they’ve had in production. I’m just worried that it may turn into an Intel-type hardware issue; 12 months after release should be enough time to work that out, but then you’re 12 months behind. I was kind of hoping to pick up a 4090 within my budget after the launch, but that’s looking unlikely as they’ve reduced available stock, which is keeping the price the same. I’ve been looking in the Black Friday sales and prices remain the same, which is a bit of a surprise given that the launch of the 50 series is well known; they’ve handled that well to maintain the price.

                             

                            It’s looking most likely that I’ll end up with an RTX ??80-series card of some description with 16GB, either 40 or maybe 50 series; I may spring the extra cash and get the 5080 when it comes out. After the cards have been reviewed, I’ll work out which way I’m going to jump.

                            That’s why I’m really interested in the limitations, and in when I’ll need to use a GPU server farm; maybe that’s a better way to go in the long run. Reading around, the FLUX models can completely fill up the VRAM on consumer cards, so I’m considering whether it’s GPU server farms only. I do need a performance lift over my present GPU; it’s a matter of working out the pros and cons.

                             

                            I still don’t know where I’m heading with SD; I have to work that out first.

                             

                             

                            in reply to: 30 Days Using Stable Diffusion Art with Scholar Monthly #15902
                            Zandebar
                            Participant

                              No Problem!!

                              You’re doing something great with this website, and I can see that you have put a lot of time and effort into making this educational site what it is. It’s also a great time saver for me, as it’s all laid out nicely and is easy to understand; no effort is needed on my side. That’s why it’s worth the fee, and you deserve to be financially compensated for your efforts and the tools you use to get everything up and running.

                               

                              Combine that with a day job and you have true PASSION! 😉

                              • This reply was modified 8 months, 2 weeks ago by Zandebar.
                              in reply to: Local Install vs GPU / Render Farms (online GPU) #15892
                              Zandebar
                              Participant

                                That does add an extra layer of complication to what I’m trying to work out.

                                It looks like I’m going to need some time to get my head around the GPU issue and what it can or can’t do at a certain VRAM.

                                I just need to work out, let’s say at 16GB, when I would need to use an online GPU service to render a certain model. Surely I should be able to look at a model and say, OK, that model won’t work on this local GPU; therefore, if I want to use that model, I’ll need to use an online GPU service. From that explanation it’s not so straightforward, and it’s just a matter of when the GPU will crash and give you an error. Surely that’s not the case, is it? You should be able to apply some logic somewhere.
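There is a rough rule of thumb you can apply before downloading a model: the weights alone need roughly (parameter count x bytes per parameter), plus some headroom for activations and the VAE. A back-of-envelope sketch, where the parameter counts and the 2GB overhead figure are loose assumptions on my part, not exact requirements:

```python
# Back-of-envelope check: will a model's weights fit in local VRAM?
# Rule of thumb: weights_gb ~= params_in_billions * bytes_per_param,
# plus headroom for activations/VAE (2 GB here is a loose guess).

def fits_in_vram(params_b: float, bytes_per_param: float, vram_gb: float,
                 overhead_gb: float = 2.0) -> bool:
    weights_gb = params_b * bytes_per_param
    return weights_gb + overhead_gb <= vram_gb

# FLUX.1 is around 12B params; at fp16 (2 bytes/param) that's ~24 GB of
# weights, so a 16 GB card needs offloading, quantisation, or an online GPU:
print(fits_in_vram(12.0, 2.0, 16.0))   # False
# SDXL at roughly 3.5B params in fp16 sits comfortably within 16 GB:
print(fits_in_vram(3.5, 2.0, 16.0))    # True
```

It is only an estimate (UIs can offload layers to system RAM rather than crash outright), but it gives the go/no-go logic you are asking for.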

                                in reply to: Getting Colab Automatic1111 Working for the first time #15891
                                Zandebar
                                Participant

                                  Please see above post as I couldn’t edit it to add this.

                                   

                                  OK!

                                   

                                  I’ve just repeated what I did in Colab on my local machine, which has an RTX 2070 Super 8GB VRAM card installed. It completed the task and produced an image; it rendered incorrectly, but nevertheless completed the task.

                                  My RTX 2070 Super 8GB shouldn’t be able to outperform a 40GB VRAM A100 using Colab’s out-of-the-box settings, which leads me to think that this is a config issue rather than a card issue. I’m a little bit lost for words here, so I need to know what’s going on.

                                   

                                  Settings used on the RTX 2070 Super 8GB to complete the image:

                                  AS-YoungV2, futuristic, Full shot of a young woman, portrait of beautiful woman, solo, pliant, Side Part, Mint Green hair, wearing a red Quantum Dot Reindeer Antler Headpiece outfit, wires, christmas onyx, green neon lights, cyberpunkai, in a Hydroponic mistletoe gardens in futuristic homes with a Robotically animated Christmas displays in public spaces. ambience, shot with a Mamiya Leaf, Fujifilm XF 16mm f/2.8 R WR lens, ISO 6400, f/1.2, Fujifilm Superia X-Tra 400, , (high detailed skin:1.2), 8k uhd, dsir, soft lighting, high quality, film grain,
                                  Negative prompt: BadDream, disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w, 2d, 3d, illustration, sketch, nfsw, nude
                                  Steps: 30, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 7, Seed: 4093592187, Size: 512×768, Model hash: 879db523c3, Model: dreamshaper_8, Denoising strength: 0.7, Hires upscale: 4, Hires upscaler: Latent, Version: v1.10.1
                                  Time taken: 8 min. 45.3 sec.
