Forum Replies Created
November 17, 2024 at 9:22 am in reply to: Getting Colab Automatic1111 Working for the first time #15871
Hi Andrew
Hope you're keeping well.
I've got Colab working now with A1111; sorry for the delay in getting back to you.
"It seems that the notebook couldn't connect to your Google Drive. Do you have a Google account? If so, it should ask you to grant permission to access your Google Drive. This is necessary for saving images and accessing models."
Yes, I have a Google account.
I too saw from the error message that the notebook couldn't connect to my Google Drive space, and you're exactly right that it came down to permissions. I was rushing because I didn't have a lot of time and was quickly trying to set it up and have a look. I hadn't read the script line by line, so I wasn't going to hit the "select all" button without knowing what it was doing in the background; I just selected a few. As you know, in computing you grant the least amount of permission needed to get the job done: the Linux approach, not the MS one ;-).
After reading the script I was more confident selecting the ones needed to get the required access to the drive; I left two out.
I suggest you put an image in the instructions showing the minimum permissions needed to connect to Google Drive, so A1111 can function in Colab with a Google Drive connection.
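For anyone else who lands here, the Drive hookup seems to boil down to the standard Colab mount call below (a minimal sketch; the actual notebook cell may wrap it in extra setup):

```python
# Standard Colab Google Drive mount; running it triggers the
# permissions dialog discussed above.
from google.colab import drive

# Mounts your Drive under /content/drive, so models and generated
# images saved there survive the Colab session ending.
drive.mount('/content/drive')
```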
Also:
I've done courses 1–3 (only part of 3) in A1111 on a local install; now I'm trying Colab and working out what add-ons I need to complete the rest of the course.
I suggest that at the start of each of courses 1–4 you list what's needed to complete it, as in: "for course X you need these add-ons installed." Then the user can check these off on the Colab install to make sure it's all correct.
There are parts of the instructions where you gloss over things without going into the reasoning behind them, such as ngrok. I'm coming to this for the first time and don't know the terminology or the reasons why it would be better. A little more explanation in that area would improve the experience.
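For what it's worth, from my reading so far, ngrok just tunnels the local A1111 port out to a public URL, which matters because Colab can't expose localhost directly. Something along these lines, assuming the pyngrok package and A1111 listening on its default port 7860 (the notebook may wire this up differently):

```python
# Hedged sketch: expose a local A1111 web UI through an ngrok tunnel.
# Assumes pyngrok is installed (pip install pyngrok) and that the UI
# is already listening on its default port, 7860.
from pyngrok import ngrok

# The auth token comes from the dashboard of a free ngrok account.
ngrok.set_auth_token("YOUR_NGROK_TOKEN")  # placeholder, not a real token

# Open a tunnel from a public URL to local port 7860 and print it.
tunnel = ngrok.connect(7860)
print("A1111 is reachable at:", tunnel.public_url)
```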
Is the notebook a one-time thing, or does the setup reside on Google Drive for later use? I can see the name of the notebook there, so I was thinking that if I clicked on it, it might load up the last setup I entered. I've just clicked on it and it's just a link to Colab, but in my opinion it would be useful to be able to click on a pre-made setup stored in your Google Drive. Just a thought…
This is a little hazy, but you've done a great job with "Let's Get You Started"; perhaps consider adding a "what you need to know about Colab" course that lets the user become a power user of Colab when it comes to generative AI on that platform.
After a quick look and a play on Colab, I think I will get more mileage out of an online GPU than a local one, but that's my bias at this stage; it may change over time. It's important to learn the basics and build that foundation first before jumping to any conclusions. It's always fun discussing what's out there and looking at other options.
All the best
Zandebar
On the GPU hardware side, I'm having trouble working out what you can do with the VRAM stack of each card.
What are the limitations of each? What can't you do with 12 GB, 16 GB, 24 GB and 48 GB (currently)?
How much headroom do you need to leave in VRAM for resources other than the model? I'm guessing 2 GB; I don't know if that's right, but it looks fair.
So that would potentially mean (again, guessing; see the sketch after the list):
- 12 GB = 10 GB max model size
- 16 GB = 14 GB max model size
- 24 GB = 22 GB max model size
- 48 GB = 46 GB max model size
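To make my own guess concrete, here's the arithmetic as a tiny sketch (the 2 GB headroom is my assumption, not a measured figure):

```python
# Back-of-envelope check: how big a model fits on a given card,
# assuming a flat ~2 GB of VRAM headroom for everything else.
HEADROOM_GB = 2  # my guess from above, not a measured number

def max_model_gb(vram_gb: float, headroom_gb: float = HEADROOM_GB) -> float:
    """Largest model (in GB) that still leaves the assumed headroom free."""
    return vram_gb - headroom_gb

def fits(model_gb: float, vram_gb: float) -> bool:
    """Does a model of this size fit on a card with this much VRAM?"""
    return model_gb <= max_model_gb(vram_gb)

for vram in (12, 16, 24, 48):
    print(f"{vram} GB card -> {max_model_gb(vram)} GB max model")

# e.g. stable-diffusion-3.5-large at 16.5 GB (discussed below):
print(fits(16.5, 16))  # False: too big for a 16 GB card
print(fits(16.5, 24))  # True: fits on a 24 GB card
```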
How large do these models get? And then there's the workflow to consider: how does that affect the VRAM?
If you have a look at Flux.1, the information I found at
https://medium.com/@researchgraph/the-ultimate-flux-1-hands-on-guide-067fc053fedd
states:
"The regular version requires at least 32GB of system RAM. Testing shows that a 4090 GPU can fully occupy its memory. The dev-fp8 version is recommended for local use."
* I assume that when it says "system RAM" it means VRAM.
So what's the difference between dev-fp8 and the regular model? (This was covered in the courses.)
Would the GeForce RTX 4090 function using this model (file sizes below are from Hugging Face)? We know that the new GeForce RTX 5090 with 32 GB (reportedly) will be able to handle it.
- flux1-dev.safetensors = 23.8 GB
- flux1-schnell.safetensors = 23.8 GB (the same size as pro)
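My rough understanding of the fp8 question, for what it's worth: a checkpoint's file size is mostly parameter count times bytes per weight, so dropping from 16-bit to 8-bit weights roughly halves it. A quick sanity check, assuming the commonly quoted ~12B parameters for Flux.1 dev (my assumption, not a figure from the course):

```python
# Rough sanity check: model file size ~ parameters x bytes per weight.
PARAMS = 12e9  # commonly quoted Flux.1 dev parameter count (assumption)

def size_gb(params: float, bytes_per_weight: int) -> float:
    """Approximate checkpoint size in GB for a given weight precision."""
    return params * bytes_per_weight / 1e9

print(size_gb(PARAMS, 2))  # ~24 GB at 16-bit, close to the 23.8 GB file
print(size_gb(PARAMS, 1))  # ~12 GB at fp8, hence the fit on 16 GB cards
```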
Also, the Stable Diffusion 3.5 model. This one fits inside the VRAM of the GeForce RTX 4090 but not the GeForce RTX 4080 Super at 16 GB:
- stable-diffusion-3.5-large = 16.5 GB
These are the new models coming out, and if I'm correct they're starting to become prohibitive for hobbyists and creative enthusiasts who can't afford high-VRAM flagship GPUs. What I'm trying to get at is: when running these models locally, what can you do with the best card you can afford?
Oops: I made a mistake in the post below and can't edit it: the 4080's +9.3% difference is with the 5080, not the 5090.
Hello
Great, and thank you! You've kind of confirmed what I was thinking.
Right: I have a bottleneck. I'm based in the UK and only have £1,000 to spend on a GPU; I'm a hobbyist and won't be making any money from this to justify the expense and the outlay. However, I'm also not sure where I'll be going with this, so I'm looking for a hybrid solution to my GPU needs.
Let's get this straight: in 12 months' time, when you get the RTX 5090 with 32 GB VRAM, you'll be saying wow at the speed and recommending 32 GB VRAM rather than 24 GB when asked the very same question.
Granted, if you're a pro then you'll need the flagship option. When you're not a pro (like me), justifying the expense becomes hard when you're on a tight budget and have household bills to pay; I can only dream of owning the latest and greatest GPU. There is a compromise, though: a cheaper card, or renting a GPU from a render farm. I'm actually looking at both at the moment.
The Nvidia GeForce RTX 4080 Super with 16 GB VRAM is what I can afford right now, and I'll be able to learn SD and do a fair bit with 16 GB. When I hit that wall and need extra VRAM, I'll outsource to a render farm and just pay for what I use. It isn't a great place to be with the new RTX 5000 series coming out: where you were at two-thirds of the maximum VRAM, once the 5000 series lands you're at half of it. And models are only getting bigger; bouncing around, I saw a 14 GB Flux model. Ouch! Not much room left for everything else that gets stored in VRAM. Chances are a model that size would still work in 16 GB, but sizes will keep growing, and we know that because of the VRAM increase in the 5000 series: if you make more space, people will fill more space. You can't win as a hobbyist.
I was also thinking that if I wait long enough, the 3090 may fit my budget.
Do you get this craziness where you are?
- EVGA GeForce RTX 3090 Ti FTW3 ULTRA GAMING, 24G-P5-4985-KR, 24GB GDDR6X, iCX3, ARGB LED, Backplate, Free eLeash – £2,094.22
- MSI GeForce RTX 4090 VENTUS 3X E 24G OC Gaming Graphics Card – 24GB GDDR6X, 2550 MHz, PCI Express Gen 4, 384-bit, 2x DP v1.4a, HDMI 2.1a (Supports 4K & 8K HDR) – £1,749.99
- GIGABYTE GeForce RTX 4090 GAMING OC 24GB Graphics Card – 24GB GDDR6X, PCI-E 4.0, Core 2535MHz, RGB Fusion, Anti-sag bracket, Metal back plate, DP 1.4, HDMI 2.1a, NVIDIA DLSS 3, GV-N4090GAMING OC-24GD – £1,899.00
The 4090 is cheaper than the 3090 Ti. Crazy! OK, the 3090 is not a toaster like the 4090 with its power-socket issue, but you would have thought there'd be some respite for us hobbyists with an older series of card. Nah! So we're stuck at the next generation down, the 4080 Super.
And wait for it: Nvidia are not doing themselves any favours with the next generation of cards now that they have no competition. Look at this…
RTX 5080
TDP: 350W
GPU Name: GB203
GPCs: 7
TPCs: 42
SMs: 84
CUDA Cores: 10,752
Tensor Cores: (likely) 384 (half the number of the RTX 5090)
Memory Configuration: 256-bit GDDR7 (16GB VRAM)
Boost Clock: around 2.8 GHz
RTX 4080
Architecture: Ada Lovelace
Process node: 4nm TSMC
CUDA cores: 9,728
Ray tracing cores: 76
Tensor cores: 304
Base clock speed: 2,205 MHz
Maximum clock speed: 2,505 MHz
Memory size: 16GB GDDR6X
Memory speed: 21 Gbps
Bus width: 256-bit
Bandwidth: 912 GBps
TBP: 320W
4080 difference: +9.3% with the 5080
Nvidia have got their head somewhere I can't say here, but logically, given the performance uplift of the 5090, you would have thought there'd be a shift in the other models too.
You'd expect the GeForce RTX 5000 series VRAM to look something like this: 12 GB (5060), 16 GB (5070), 24 GB (5080) and 32 GB (5090).
And the CUDA core count is not much higher; you would have thought they'd match the 4090's cores with the 5080. At a count of 10,752 you'd have expected them to match it at 16,384 CUDA cores, given that the 5090 is rumoured to have 21,760. And the Tensor core count has dropped; maybe there's a good reason for that, but it's beyond my scope.
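Just to put rough numbers on the core-count gap, counting CUDA cores only and treating the 5000-series figures as the rumours they are:

```python
# CUDA core counts quoted above; the 5000-series entries are rumours,
# so this comparison is illustrative only.
cores = {
    "RTX 4080": 9_728,
    "RTX 4090": 16_384,
    "RTX 5080 (rumoured)": 10_752,
    "RTX 5090 (rumoured)": 21_760,
}

base = cores["RTX 4080"]
for card, n in cores.items():
    print(f"{card}: {n:,} cores, {(n / base - 1) * 100:+.1f}% vs the 4080")
```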
Logically that would make more sense; as it is, it just leaves us users of the products frustrated. If the 5080 had the imaginary 24 GB VRAM and 16,384 CUDA cores, it would almost match the 4090 and force a price drop on the remaining 4090 units. Everyone wins, but no…
That's why I'm waiting to see what the market does and whether these rumoured specs are true, and I'll make a decision then. Either way, the consumer is going to be at a disadvantage, given Nvidia's previous history.
In the meantime, checking out GPU farms and what they can offer looks like a good idea and could in principle be more beneficial. That's out of scope for this thread, though; I'll make one on GPU farms…
Kind Regards
Zandebar
November 11, 2024 at 9:55 am in reply to: Getting Colab Automatic1111 Working for the first time #15801
Yes, I can help you sort this.