This is a step-by-step guide for using the Google Colab notebook in the Quick Start Guide to run AUTOMATIC1111. This is one of the easiest ways to use AUTOMATIC1111 because you don’t need to deal with the installation.
See installation instructions on Windows PC and Mac if you prefer to run locally.
This notebook is designed to share models in Google Drive with other notebooks from this site.
Google has blocked usage of Stable Diffusion with a free Colab account. You need a paid plan to use this notebook.
Table of Contents
- What is AUTOMATIC1111?
- What is Google Colab?
- Alternatives
- Step-by-step instructions to run the Colab notebook
- ngrok (Optional)
- When you are done
- Computing resources and compute units
- Models available
- Other models
- Installing models
- Installing extensions from URL
- Extra arguments to webui
- Version
- Extensions
- Frequently asked questions
- Next Step
What is AUTOMATIC1111?
Stable Diffusion is a machine-learning model. It is not very user-friendly by itself; you need to write code to use it. Most users therefore use a GUI (Graphical User Interface). Instead of writing code, you write prompts in a text box and click buttons to generate images.
AUTOMATIC1111 is one of the first Stable Diffusion GUIs ever developed. It supports standard AI functions like text-to-image, image-to-image, upscaling, ControlNet, and even training models (although I don't recommend it).
What is Google Colab?
Google Colab (Google Colaboratory) is an interactive computing service offered by Google. It is a Jupyter Notebook environment that allows you to execute code.
They have three paid plans – Pay As You Go, Colab Pro, and Colab Pro+. You need the Pro or Pro+ plan to use all the models. I recommend the Colab Pro plan. It gives you 100 compute units per month, which is about 50 hours on a standard GPU. (It's a steal.)
With a paid plan, you have the option to use a Premium GPU, which is an A100. That comes in handy when you need to train Dreambooth models fast.
When you use Colab for AUTOMATIC1111, be sure to disconnect and shut down the notebook when you are done. The notebook consumes compute units as long as it is kept open.
You will need to sign up with one of the plans to use the Stable Diffusion Colab notebook. They have blocked the free usage of AUTOMATIC1111.
Alternatives
Think Diffusion provides a fully managed AUTOMATIC1111/Forge/ComfyUI web service. It costs a bit more than Colab, but it saves you the trouble of installing models and extensions and starts up faster. They offer 20% extra credit to our readers. (Affiliate link)
Step-by-step instructions to run the Colab notebook
Step 0. Sign up for a Colab Pro or Pro+ plan. (I use Colab Pro.)
Step 1. Open the Colab notebook in Quick Start Guide. You should see the notebook with the second cell below.
Step 2. Set the username and password. You will need to enter them before using AUTOMATIC1111.
Step 3. Check the models you want to load. If you are a first-time user, you can use the default settings.
Step 4. Click the Play button on the left of the cell to start.
Step 5. It will install A1111 and the models in the Colab environment.
Step 6. Follow the gradio.live link to start AUTOMATIC1111.
Step 7. Enter the username and password you specified in the notebook.
Step 8. You should see the AUTOMATIC1111 GUI after you log in.
Put “a cat” in the prompt text box and press Generate to test Stable Diffusion. It should generate an image of a cat.
ngrok (Optional)
If you run into display issues with the GUI, you can use ngrok instead of Gradio to establish the public connection. It is more stable than the default Gradio connection.
You will need to set up a free account and get an authtoken.
- Go to https://ngrok.com/
- Create an account
- Verify email
- Copy the authtoken from https://dashboard.ngrok.com/get-started/your-authtoken and paste it into the ngrok field in the notebook.
The Stable Diffusion cell in the notebook should look like below after you put in your ngrok authtoken.
Click the play button on the left to start running. When it is done loading, you will see a link to ngrok.io in the output under the cell. Click the ngrok.io link to start AUTOMATIC1111. The first link in the example output below is the ngrok.io link.
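For reference, the tunnel the notebook opens can be reproduced with the pyngrok package. The sketch below is only an illustration of how such a tunnel is created with an authtoken; it assumes A1111 is listening on its default port 7860 and is not the notebook's exact code.
# Minimal sketch: open an ngrok tunnel to a local A1111 instance (assumes pyngrok is installed).
from pyngrok import ngrok
ngrok.set_auth_token("YOUR_NGROK_AUTHTOKEN")  # the authtoken copied from the ngrok dashboard
tunnel = ngrok.connect(7860)  # forward A1111's default local port
print(tunnel.public_url)  # open this URL in the browser to reach the GUI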
When you visit the ngrok link, it should show a message like below
Click Visit Site to start the AUTOMATIC1111 GUI. Occasionally, you will see a warning that the site is unsafe to visit. It is likely because someone previously used the same ngrok link to put up something malicious. Since you are the one who created this link, you can ignore the safety warning and proceed.
When you are done
When you finish using the notebook, don’t forget to click “Disconnect and delete runtime” in the top right drop-down menu. Otherwise, you will continue to consume compute credits.
Computing resources and compute units
To view computing resources and credits, click the downward caret next to the runtime type (e.g., T4, High-RAM) at the top right. You will see the remaining compute units and the usage rate.
Models available
For your convenience, the notebook has options to load some popular models. You will find a brief description of them in this section.
v1.5 models
v1.5 model
The v1.5 model was released after v1.4 and is the last v1 model. Images from this model are very similar to those from v1.4. You can treat v1.5 as the default v1 base model.
v1.5 inpainting model
The official v1.5 model trained for inpainting.
Realistic Vision
Realistic Vision v2 is good for generating anything realistic, whether they are people, objects, or scenes.
F222
F222 is good at generating photo-realistic images. It is good at generating females with correct anatomy.
Caution: F222 is prone to generating explicit images. Suppress them by adding “dress” to the prompt or “nude” to the negative prompt.
Dreamshaper
Dreamshaper is easy to use and good at generating a popular photorealistic illustration style. It is an easy way to “cheat” and get good images without a good prompt!
Open Journey Model
Open Journey is a model fine-tuned on images generated by Midjourney v4. It has a different aesthetic and is a good general-purpose model.
Triggering keyword: mdjrny-v4 style
Anything v3
Anything V3 is a special-purpose model trained to produce high-quality anime-style images. You can use danbooru tags (like 1girl, white hair) in the text prompt.
It is useful for rendering celebrities in anime style, which can then be blended seamlessly with illustrative elements.
Inkpunk Diffusion
Inkpunk Diffusion is a Dreambooth-trained model with a very distinct illustration style.
Use keyword: nvinkpunk
v2 models
v2 models are base models released by Stability AI after v1. They are generally harder to use and are not recommended for beginners.
v2.1 768 model
The v2.1-768 model is the latest high-resolution v2 model. The native resolution is 768×768 pixels. Make sure to set at least one side of the image to 768 pixels. It is imperative to use negative prompts in v2 models.
You will need Colab Pro to use this model because it needs a high RAM instance.
v2 depth model
v2 depth model extracts depth information from an input image and uses it to guide image generation. See the tutorial on depth-to-image.
SDXL model
This Colab notebook supports the SDXL 1.0 base and refiner models.
Select SDXL_1 to load the SDXL 1.0 model.
Important: Don't use a VAE from v1 models. Go to Settings > Stable Diffusion and set SD VAE to Automatic or None.
Check out some SDXL prompts to get started.
Other models
Here are some models that you may be interested in.
See more realistic models here.
Dreamlike Photoreal
Dreamlike Photoreal Model Page
Model download URL
https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.safetensors
The Dreamlike Photoreal model is good at generating beautiful females with correct anatomy. It is similar to F222.
Triggering keyword: photo
Caution: This model is prone to generating explicit photos. Suppress them by adding “dress” to the prompt or “nude” to the negative prompt.
Lyriel
Lyriel excels in artistic style and is good at rendering a variety of subjects, ranging from portraits to objects.
Model download URL:
https://civitai.com/api/download/models/50127
Deliberate v2
Deliberate v2 is a well-trained model capable of generating photorealistic illustrations, anime, and more.
Installing models
There are two ways to install models that are not on the model selection list.
- Use the Checkpoint_models_from_URL and LoRA_models_from_URL fields.
- Put the model files in your Google Drive.
Install models using URLs
You can only install checkpoint or LoRA models using this method.
Put the download URLs in the field. A link should directly initiate the file download when you visit it in your browser.
- Checkpoint_models_from_URL: Use this field for checkpoint models.
- LoRA_models_from_URL: Use this field for LoRA models.
Some models on CivitAI need an API key to download. Go to the account page on CivitAI to create a key and put it in the Civitai_API_Key field.
Below is an example of getting the download link on CivitAI.
Put it in the Model_from_URL field.
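If you want to verify a download link outside the notebook, the sketch below shows one way to fetch a model file with Python. The Bearer-token header for gated CivitAI downloads and the output file name are assumptions for illustration; the URL is the Lyriel example from the Other models section.
# Sketch: download a checkpoint from a direct URL, optionally with a CivitAI API key.
import requests
url = "https://civitai.com/api/download/models/50127"  # example download link (Lyriel)
headers = {"Authorization": "Bearer YOUR_CIVITAI_API_KEY"}  # assumed format; only needed for gated models
with requests.get(url, headers=headers, stream=True, allow_redirects=True) as r:
    r.raise_for_status()
    with open("model.safetensors", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)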
Installing models in Google Drive
After running the notebook for the first time, you should see the folder AI_PICS > models created in your Google Drive. The folder structure inside mirrors AUTOMATIC1111's and is designed to share models with other notebooks from this site.
Put your model files in the corresponding folder. For example,
- Put checkpoint model files in AI_PICS > models > Stable-diffusion.
- Put LoRA model files in AI_PICS > models > Lora.
You will need to restart the notebook to see the new models.
Installing extensions from URL
You can install any number of extensions using this field. You will need the URL of the extension's GitHub page.
For example, put in the following if you want to install the Civitai model extension.
https://github.com/civitai/sd_civitai_extension
You can also install multiple extensions. The URLs need to be separated by commas. For example, the following URLs install the Civitai and the multidiffusion extensions.
https://github.com/civitai/sd_civitai_extension,https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111
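Under the hood, installing an extension amounts to cloning its GitHub repository into A1111's extensions folder. The sketch below illustrates that idea, assuming the standard Colab install path /content/stable-diffusion-webui; it is not the notebook's exact code.
# Sketch: clone extension repositories into A1111's extensions folder.
import subprocess
from pathlib import Path
extensions_dir = Path("/content/stable-diffusion-webui/extensions")  # assumed install path
urls = "https://github.com/civitai/sd_civitai_extension,https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111"
for url in urls.split(","):
    url = url.strip()
    target = extensions_dir / url.rstrip("/").split("/")[-1]  # folder named after the repository
    if not target.exists():
        subprocess.run(["git", "clone", url, str(target)], check=True)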
Extra arguments to webui
You can add extra arguments to the Web-UI using the Extra_arguments field.
For example, if you use the LyCORIS extension, the extra webui argument --lyco-dir is handy for specifying a custom LyCORIS model directory in your Google Drive.
Another useful argument is --api, which allows API access. It is useful for some applications, e.g. the Photoshop Automatic1111 plugin.
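For example, a combined Extra_arguments value that enables the API and points --lyco-dir at a folder in Google Drive could look like the line below. The Drive path is only an illustration; adjust it to wherever you keep your LyCORIS models.
--api --lyco-dir /content/drive/MyDrive/AI_PICS/models/LyCORIS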
Version
Now you can specify the version of Stable Diffusion WebUI you want to load. Use this at your own risk, as I only test the version saved in the notebook.
Notes on some versions
v1.6.0: You need to add --disable-model-loading-ram-optimization to the Extra_arguments field.
Extensions
ControlNet
ControlNet is a Stable Diffusion extension that can copy the composition and pose of the input image, and more. ControlNet has taken the Stable Diffusion community by storm because there is so much you can do with it.
This notebook supports ControlNet. See the tutorial article.
You can put your custom ControlNet models in the AI_PICS/ControlNet folder.
Deforum – Making Videos using Stable Diffusion
You can make videos with text prompts using the Deforum extension. See this tutorial for a walkthrough.
Regional Prompter
Regional prompter lets you use different prompts for different regions of the image. It is a valuable extension for controlling the composition and placement of objects.
After Detailer
The After Detailer (ADetailer) extension automatically fixes faces and hands when you generate images.
Openpose editor
Openpose editor is an extension that lets you edit the openpose control image. It is useful for manipulating poses when generating images with ControlNet.
AnimateDiff
AnimateDiff lets you create short videos from a text prompt. You can use any Stable Diffusion model and LoRA. Follow this tutorial to learn how to use it.
text2video
Text2video lets you create short videos from a text prompt using a model called Modelscope. Follow this tutorial to learn how to use it.
Infinite Image Browser
The Infinite Image Browser extension lets you manage your generations right in the A1111 interface. The secret key is SDA.
Frequently asked questions
Do I need a paid Colab account to use the notebook?
Yes, you need a Colab Pro or Pro+ account to use this notebook. Google has blocked the free usage of Stable Diffusion.
Is there any alternative to Google Colab?
Yes, Think Diffusion provides fully managed AUTOMATIC1111/Forge/ComfyUI online as a web service. They offer 20% extra credit to our readers. (Affiliate link)
Do I need to use ngrok?
You don’t need to use ngrok to use the Colab notebook. In my experience, ngrok provides a more stable connection between your browser and the GUI. If you experience issues like buttons not responding, you should try ngrok.
What is the password for the Infinite Image Browser?
SDA
Why do I keep getting disconnected?
Two possible reasons:
- There’s a human verification shortly after starting each Colab notebook session. You will get disconnected if you do not respond to it. Make sure to switch back to the Colab notebook and check for verification.
- You are using a free account. Google has blocked A1111 in Colab. Get Colab Pro.
Can I use the dreambooth models I trained?
Yes, put the model file in the corresponding folder in Google Drive.
- Checkpoint models: AI_PICS > models > Stable-diffusion.
- LoRA models: AI_PICS > models > Lora.
How to enable API?
You can use AUTOMATIC1111 as an API server. Add the following to the Extra Web-UI arguments.
--api
The server's URL is the same as the one you use to access the Web-UI (i.e., the gradio or ngrok link).
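Once the API is enabled, you can send requests to the standard A1111 endpoints from any HTTP client. The sketch below calls the txt2img endpoint with Python; the base URL is a placeholder for your own gradio or ngrok link, and any authentication you set in the notebook may still apply.
# Sketch: call the A1111 txt2img API endpoint and save the returned image.
import base64
import requests
base_url = "https://xxxxxx.gradio.live"  # placeholder; use your own gradio or ngrok link
payload = {"prompt": "a cat", "steps": 20, "width": 512, "height": 512}
r = requests.post(f"{base_url}/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
image_bytes = base64.b64decode(r.json()["images"][0])  # images are returned as base64 strings
with open("cat.png", "wb") as f:
    f.write(image_bytes)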
Why do my SDXL images look garbled?
Check to make sure you are not using a VAE from v1 models. Check Settings > Stable Diffusion > SD VAE. Set it to None or Automatic.
Next Step
If you are new to Stable Diffusion, check out the Absolute beginner’s guide.
The LoRA_models_from_URL field isn't present in the notebook either.
Putting LoRAs in AI_PICS > models > Lora doesn't make them show up, even after a reload 🤔
Hi, I just tested the notebook, and loading LoRAs from Google Drive is working correctly.
1. Try pressing the Refresh button on the LoRA tab.
2. A1111 only shows LoRAs that are compatible with the checkpoint model. E.g. Select an XL checkpoint -> Refresh the LoRA tab to show XL LoRAs.
The Save_In_Google_Drive option is gone in the latest version of the notebook.
It has been removed. You can access the old version, which I no longer maintain: https://stable-diffusion-art.com/legacy-A1111-notebook
Would you ever consider doing a notebook as UI-friendly as yours but with ComfyUI? People are migrating towards it, and I am still unable to find a Colab notebook as clear as yours.
I managed to run ‘Comfy’ UI with a Colab notebook. The problem is that this is the most ironic name for anything ever. Comfy like a maze made of burning chainsaws.
I have one, but it's not as well written as this one. I will think about it, given that A1111 is not keeping up with the latest tech.
+1 for the request for a Comfy notebook. I'm willing to pay extra for a ComfyUI notebook from you, Andrew.
@Bjørn, what notebook do you use?
OK I will think about it 🙂
Hi Andrew,
Thank you for the Colab. I’m grateful to be using it.
Recently, I’ve encountered a couple of issues while using this Colab:
1. When I use NGROK, I get an error stating that my request has been refused. Are we still able to use NGROK?
2. When I use Gradio, the page displays an error saying, “This site can’t be reached.” I’m wondering if there’s an issue with Gradio.
Andrew, do you have any idea what might be causing these issues? Thank you for your help.
Hi, I just ran it with Gradio and it is working correctly. Perhaps it was a temporary issue. The need for ngrok is a lot less nowadays; I recommend using it only when Gradio is not available.
The Save_In_Google_Drive Everything mode has stopped working for me with the A100. I had been able to use it on a regular basis up until around four days ago. I'm not sure what changed in that time, but I've tried every day since then with no luck, both with gradio and ngrok. T4 still works, but I find it much too slow for SDXL, which is why I subscribe to Colab Pro+. There are never any error messages or warnings in either the UI or the Colab log. The UI boots up and I can access it just fine, and I can change and save settings, but I am unable to actually generate any images or even view/refresh my LoRAs. I click the Generate button and the buttons change to the typical Interrupt/Skip buttons, but nothing happens, and it just acts like it's stuck before it even says 0% progress. There is no additional output in the Colab log when I do this either; the most recent lines there are just the startup messages about applying attention optimization, embedding loading, and model loading in however many seconds.
I get the same sort of issue when I try to view or refresh my LoRAs before even trying to generate an image; it acts like it's starting to refresh but then gets stuck in some sort of infinite loading/processing.
Do you have any advice?
The save everything setting is problematic due to the inherent workings of Colab. You can try starting a new AI_PICS folder (e.g. AI_PICS2) to see if that resolves the problem. Otherwise, use the default save setting.
If I use the default save setting, will I have to re-install/re-download my checkpoints, embeddings, and loras every time I start up?
You may need to move the model files to different folders in Google Drive. See this post for the folder locations. You can switch to the default setting to see if you still see the models and whether it resolves the issue.
Share link not created.
V1.9.0. Selected AnimateDiff and ControlNet. It seems this has been happening since 27 Apr, as in the comments below.
>>>>>>
Running on local URL: http://127.0.0.1:7860
Interrupted with signal 2 in
Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.
Startup time: 267.1s (prepare environment: 52.7s, import torch: 4.6s, import gradio: 1.0s, setup paths: 3.6s, initialize shared: 0.3s, other imports: 0.7s, list SD models: 0.5s, load scripts: 19.3s, create ui: 18.9s, gradio launch: 165.5s).
Looks like it was a temp issue.
Hello, Andrew. First of all, I would like to say thanks for your Colab work! I've been actively using it without many issues~
Just a heads up: today, gradio has an issue where the xxxxxx.gradio.live link does not appear, only the local URL, which is non-functional as expected.
Apply lowram patch
/content/stable-diffusion-webui
WEBUI ARGUMENTS: --gradio-img2img-tool color-sketch --enable-insecure-extension-access --gradio-queue --share --gradio-auth "a:a" --disable-model-loading-ram-optimization --opt-sdp-attention --medvram-sdxl
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Version: v1.9.0
Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b
Launching Web UI with arguments: --gradio-img2img-tool color-sketch --enable-insecure-extension-access --gradio-queue --share --gradio-auth a:a --disable-model-loading-ram-optimization --opt-sdp-attention --medvram-sdxl
2024-04-27 15:26:50.920974: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-04-27 15:26:50.921028: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-04-27 15:26:50.922414: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-04-27 15:26:52.248896: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
[-] ADetailer initialized. version: 24.4.2, num models: 10
Checkpoint sweetMix_v22Flat.safetensors [83326ee94a] not found; loading fallback aurora_v10.safetensors [1b5f8211ec]
Loading weights [1b5f8211ec] from /content/stable-diffusion-webui/models/Stable-diffusion/aurora_v10.safetensors
Running on local URL: http://127.0.0.1:7860
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
Loading VAE weights specified in settings: /content/stable-diffusion-webui/models/VAE/blessed2.vae.pt
Applying attention optimization: sdp… done.
Model loaded in 7.1s (load weights from disk: 0.8s, create model: 2.4s, apply weights to model: 2.6s, load VAE: 0.5s, load textual inversion embeddings: 0.5s, calculate empty prompt: 0.2s).
ngrok works fine for now.
I just realised this after letting the colab run for a while:
Model loaded in 52.2s (calculate hash: 31.3s, load weights from disk: 0.4s, create model: 3.5s, apply weights to model: 2.6s, load VAE: 7.5s, load textual inversion embeddings: 6.3s, calculate empty prompt: 0.5s).
Interrupted with signal 2 in
Could not create share link. Missing file: /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2.
Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps:
1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
2. Rename the downloaded file to: frpc_linux_amd64_v0.2
3. Move the file to this location: /usr/local/lib/python3.10/dist-packages/gradio
Startup time: 217.8s (prepare environment: 2.1s, import torch: 4.6s, import gradio: 1.0s, setup paths: 3.4s, initialize shared: 1.0s, other imports: 1.1s, list SD models: 4.2s, load scripts: 22.6s, create ui: 1.2s, gradio launch: 176.4s).
Maybe that’s the reason why gradio links are not appearing anymore?
Hi, gradio is working now. It's likely a temporary issue.
Hello, I get the error “HTTP Requests exceeded” in ngrok…
Also, SD is not running on localhost; I get “ERR_CONNECTION_REFUSED” when I try to connect from Colab.
Can you give me any advice on this?
Interesting… ngrok is normally not needed nowadays. You can try without.
Yes… but if I leave the ngrok field empty, I cannot connect to SD on Colab…
The gradio link shows up now. It was a temp issue.
Hey,
AnimateDiff is not working. I've got this error:
*** Error calling: /content/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py/ui
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/scripts.py", line 547, in wrap_call
return func(*args, **kwargs)
File "/content/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 43, in ui
from scripts.animatediff_mm import mm_animatediff as motion_module
ModuleNotFoundError: No module named 'scripts.animatediff_mm'
---
*** Error calling: /content/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py/ui
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/scripts.py", line 547, in wrap_call
return func(*args, **kwargs)
File "/content/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 43, in ui
from scripts.animatediff_mm import mm_animatediff as motion_module
ModuleNotFoundError: No module named 'scripts.animatediff_mm'