How to run Stable Diffusion on Google Colab (AUTOMATIC1111)


This is a step-by-step guide for using the Google Colab notebook in the Quick Start Guide to run AUTOMATIC1111. This is one of the easiest ways to use AUTOMATIC1111 because you don’t need to deal with installation.

See install instructions on Windows PC and Mac if you prefer to run locally.

What is AUTOMATIC1111?

You should know what AUTOMATIC1111 is if you want to be a serious user of Stable Diffusion. You can choose not to use it, but you need to know what it can do because it is the gold standard for features, though not necessarily for stability.

Stable Diffusion is a machine-learning model. By itself, it is not very user-friendly: you need to write code to use it, which is a hassle. Most users therefore use a GUI (Graphical User Interface). Instead of writing code, you write prompts in a text box and click buttons to generate images.

AUTOMATIC1111 was one of the first GUIs developed for Stable Diffusion. Although it is associated with AUTOMATIC1111’s GitHub account, developing this software has been a community effort.

AUTOMATIC1111 is feature-rich: You can use text-to-image, image-to-image, upscaling, depth-to-image, and run and train custom models all within this GUI. Many of the tutorials on this site are demonstrated with this GUI.

What is Google Colab?

Google Colab is an interactive computing service offered by Google. You can use it for free with a Google account, but you may get disconnected during busy hours or if you have been using it heavily.

They have three paid plans – Pay As You Go, Colab Pro, and Colab Pro+. If you decide to pay, I recommend the Colab Pro plan. It gives you 100 compute units per month, which is about 50 hours on a standard GPU. (It’s a steal.) You also get high-RAM machines, which are useful for some v2 models, and other conveniences.

With a paid plan, you have the option to use a Premium GPU, an A100. That comes in handy when you need to train Dreambooth models fast.

If you use Colab for AUTOMATIC1111, be sure to disconnect and shut down the notebook when you are done. It consumes compute units for as long as it is kept open.

Step-by-step instructions to run the Colab notebook

Step 1. Open the Colab notebook in Quick Start Guide. You should see the notebook with the second cell like below.

Step 2. Review the username and password. You will need these credentials after you start AUTOMATIC1111.

Step 3. Review the Save_In_Google_Drive option. Three options are available.

  1. Everything: Saves the whole AUTOMATIC1111 Stable Diffusion webui in your Google Drive. This option requires more maintenance. Recommended for advanced users only.
  2. Models, images and settings: Saves Lora models, embeddings, GUI settings, and all images in your Google Drive, and loads them from there on the next run.
  3. Nothing: Does not use your Google Drive. All data and images are deleted after you disconnect.

You must grant permission to access Google Drive if you choose the first or second option.

Step 4. Check the models you want to load. Currently we offer v1.4, v1.5, v1.5 inpainting, F222, Anything v3, Inkpunk Diffusion, Mo Di Diffusion, v2.1-512, v2.1-768, and the v2 depth model.

If you are a first-time user, you can select the v1.5 model.

If you chose to save everything in Google Drive, the models will be downloaded to your Google Drive.

Step 5. Optionally, put the download URL of a custom model in the Model_from_URL field. (See the section Installing a model from URL below.)

Step 6. Click the Play button on the left of the cell to start. It may warn that you need high RAM if you don’t have a Pro subscription. You can ignore the warning if you don’t use the v2.1 768 px model.

Step 7. Start-up should complete within a few minutes. How long it takes depends on how many models you include. When it is done, you should see the message below.

Step 8. Follow the gradio.live link to start AUTOMATIC1111.

Step 9. Enter the username and password you specified in the notebook.

Step 10. You should see the AUTOMATIC1111 GUI after you log in.

Put “a cat” in the prompt text box and press Generate to test Stable Diffusion. It should generate an image of a cat.
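If you prefer scripting, you can run the same test through AUTOMATIC1111’s API, which is available when the webui is launched with the --api command-line flag. Below is a minimal sketch: the helper function is my own, and the gradio.live URL is a placeholder, not a real endpoint.

```python
import json

def make_txt2img_payload(prompt, width=512, height=512, steps=20):
    """Build a minimal request body for the /sdapi/v1/txt2img endpoint."""
    return {
        "prompt": prompt,
        "negative_prompt": "",
        "width": width,
        "height": height,
        "steps": steps,
    }

payload = make_txt2img_payload("a cat")
print(json.dumps(payload, indent=2))

# To actually generate, POST the payload to your running instance
# (placeholder URL shown, untested):
# import requests
# r = requests.post("https://xxxx.gradio.live/sdapi/v1/txt2img", json=payload)
# image_b64 = r.json()["images"][0]  # base64-encoded PNG
```

The response contains the generated images as base64 strings, which you can decode and save locally.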

ngrok (Optional)

If you run into display issues with the GUI, you can try using ngrok instead of Gradio to establish the public connection. It is a more stable alternative to the default gradio connection.

You will need to set up a free account and get an authtoken.

  1. Go to https://ngrok.com/
  2. Create an account
  3. Verify email
  4. Copy the authtoken from https://dashboard.ngrok.com/get-started/your-authtoken and paste it into the ngrok field in the notebook.

The Stable Diffusion cell in the notebook should look like below after you put in your ngrok authtoken.

Click the play button on the left to start running. When it is done loading, you will see a link to  ngrok.io in the output under the cell. Click the ngrok.io link to start AUTOMATIC1111. The first link in the example output below is the ngrok.io link.

When you visit the ngrok link, it should show a message like below

Click on Visit Site to start the AUTOMATIC1111 GUI. Occasionally, you may see a warning that the site is unsafe to visit, likely because someone previously used the same ngrok link to host something malicious. Since you are the one who created this link, you can ignore the safety warning and proceed.

Models available

For your convenience, the notebook has options to load some popular models. You will find a brief description of them in this section.

v1 models

v1.4 model

v1.4 model is the first publicly released Stable Diffusion base model.

v1.5 model

The v1.5 model was released after v1.4 and is the last v1 model. Images from this model are very similar to v1.4’s. You can treat the v1.5 model as the default v1 base model.

v1.5 inpainting model

A special model trained for inpainting.

F222


F222 is good at generating photo-realistic images. It is good at generating females with correct anatomy.

Caution: It is an NSFW model. Suppress explicit images with the prompt “dress” or the negative prompt “nude”.

Dreamshaper


Model Page

Dreamshaper is easy to use and good at generating a popular photorealistic illustration style. It is an easy way to “cheat” and get good images without a good prompt!

Open Journey Model


Model Page

Open Journey is a model fine-tuned with images generated by Midjourney v4. It has a different aesthetic and is a good general-purpose model.

Triggering keyword: mdjrny-v4 style

Anything v3


Model Page

Anything V3 is a special-purpose model trained to produce high-quality anime-style images. You can use danbooru tags (like 1girl, white hair) in the text prompt.

It’s useful for casting celebrities into anime style, which can then be blended seamlessly with illustrative elements.

Inkpunk Diffusion


Inkpunk Diffusion is a Dreambooth-trained model with a very distinct illustration style.

Model Page

Use keyword: nvinkpunk

v2 models

v2 models are the newest base models released by Stability AI. They are generally harder to use and are not recommended for beginners.

v2.1 768 model

Sample 2.1 image.

The v2.1-768 model is the latest high-resolution v2 model. The native resolution is 768×768 pixels. Make sure to set at least one side of the image to 768 pixels. It is imperative to use negative prompts in v2 models.

You will need Colab Pro to use this model because it needs a high RAM instance.

v2.1 512 model

The v2.1-512 model is the lower-resolution version of the v2.1 model.

v2 depth model


v2 depth model extracts depth information from an input image and uses it to guide image generation. See the tutorial on depth-to-image.

Other models

Here are some models that you may be interested in.

Dreamlike Photoreal


Dreamlike Photoreal Model Page

Model download URL

https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.safetensors

Dreamlike Photoreal model is good at generating beautiful females with correct anatomy. It is similar to F222.

Triggering keyword: photo

Caution: It is an NSFW model. Suppress explicit images with the prompt “dress” or the negative prompt “nude”.

Save in Google Drive – Small models, images and settings

I recommend this option for most users. It is designed to save small data files to Google Drive while downloading the big files on each run, so Stable Diffusion won’t use up your Google Drive storage.

Select the Small models, images and settings option. The following are saved in your Google Drive.

  • All generated images
  • GUI settings
  • Prompt and parameters used in the last generated image
  • Embeddings (Path: AI_PICS/embeddings)
  • Lora models (Path: AI_PICS/Lora)
  • Upscalers (Path: AI_PICS/ESRGAN)
  • Hypernetworks (Path: AI_PICS/hypernetworks)

Next time you run the notebook, all of the above will be available.

This option will not save any models in your Google Drive. But it will load all the models you put in AI_PICS/models.

You only need to put models that you use frequently but that are NOT in the notebook’s model list into AI_PICS/models. Since model files are large (2 to 7 GB), you don’t want to put too many in your Google Drive. (Free Google Drive storage is only 15 GB.)
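To see how much of that quota your models would take, you can total the file sizes in a folder. A small sketch (the models_size_gb helper is my own, not part of the notebook; on Colab you would point it at /content/drive/MyDrive/AI_PICS/models):

```python
import os
import tempfile

MODEL_EXTS = (".ckpt", ".safetensors")

def models_size_gb(folder):
    """Sum the sizes of model files in `folder`, in gigabytes."""
    total = 0
    for name in os.listdir(folder):
        if name.endswith(MODEL_EXTS):
            total += os.path.getsize(os.path.join(folder, name))
    return total / 1024**3

# Demo with dummy files standing in for multi-GB checkpoints.
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "model-a.safetensors"), "wb") as f:
    f.write(b"\0" * 1024)       # tiny stand-in for a checkpoint
with open(os.path.join(demo, "notes.txt"), "w") as f:
    f.write("ignored")          # non-model files are skipped
print(f"{models_size_gb(demo):.9f} GB")
```

Comparing the total against the 15 GB free quota tells you how many more checkpoints you can afford to keep in Drive.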

Installing embeddings

Embeddings are lightweight files used to modify styles or inject objects. To install embeddings, drag and drop the file to stable-diffusion-webui > embeddings.

Embeddings are reloaded whenever you switch models. You will get a confirmation in the log message on Colab.

Installing LoRA

LoRA (Low-Rank Adaptation) models are small patches that can be applied to model checkpoints. They are small, usually between 3 and 200 MB, making them easy to store. They are good lightweight alternatives to full checkpoint models.

To install a LoRA model, drag and drop the model to the directory stable-diffusion-webui > models > Lora in the file explorer panel.

Lora model folder.

The LoRA model will be saved to your Google Drive under AI_PICS > Lora if Use_Google_Drive is selected, so you can reuse it next time you select the same option.

Alternatively, if you use the Google Drive option, you can put a LoRA model directly in AI_PICS > Lora in your Google Drive. Uploading is faster this way.

Installing Upscalers

You can use upscalers from your Google Drive. Just put them in the AI_PICS > ESRGAN folder, and they will be loaded next time you start the notebook with the Use_Google_Drive option.

Using models in Google Drive

You can use models in your Google Drive. You must put the models in the following default location.

AI_PICS/models

All models within this folder will be loaded during start-up.
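One way a notebook can expose Drive models to the webui without copying the multi-gigabyte files is to symlink them into the webui’s model folder. The sketch below illustrates that idea with temporary directories; it is my own illustration, not the notebook’s actual code. The real paths would be /content/drive/MyDrive/AI_PICS/models and /content/stable-diffusion-webui/models/Stable-diffusion.

```python
import os
import tempfile

def link_models(drive_models, webui_models):
    """Symlink every file in drive_models into webui_models,
    skipping links that already exist."""
    os.makedirs(webui_models, exist_ok=True)
    for name in os.listdir(drive_models):
        src = os.path.join(drive_models, name)
        dst = os.path.join(webui_models, name)
        if os.path.isfile(src) and not os.path.exists(dst):
            os.symlink(src, dst)

# Demo with temporary stand-ins for the Drive and webui folders.
drive_dir = tempfile.mkdtemp()
webui_dir = os.path.join(tempfile.mkdtemp(), "Stable-diffusion")
open(os.path.join(drive_dir, "my-model.safetensors"), "w").close()
link_models(drive_dir, webui_dir)
print(os.listdir(webui_dir))  # → ['my-model.safetensors']
```

Because symlinks only reference the file on Drive, start-up does not pay the cost of copying each checkpoint into the Colab runtime.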

Installing hypernetworks

To install hypernetworks, put them in the following location:

AI_PICS/hypernetworks

Save in Google Drive – Everything

This option saves the whole Stable Diffusion webui folder in your Google Drive. The default location is AI_PICS > stable-diffusion-webui. Installing models is no different from Windows or Mac. Below are the folder paths:

  • Models: AI_PICS/stable-diffusion-webui/models/Stable-diffusion
  • Upscalers: AI_PICS/stable-diffusion-webui/models/ESRGAN
  • Lora: AI_PICS/stable-diffusion-webui/models/Lora
  • Embeddings: AI_PICS/stable-diffusion-webui/embeddings
  • Hypernetworks: AI_PICS/stable-diffusion-webui/hypernetworks

Installing a model from URL

You can install models from URLs using the Model_from_URL field. Currently, you can only install v1 models. Below is an example input for installing DreamShaper from HuggingFace:

https://huggingface.co/Lykon/DreamShaper/resolve/main/Dreamshaper_3.32_baked_vae_clip_fix_half.ckpt

(Link may not be correct as this model is updated frequently)
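Make sure the URL you paste is the direct download link (containing /resolve/) rather than the file’s web page (containing /blob/): the page URL returns HTML instead of the checkpoint. A one-line helper can make the conversion; the function name is my own, for illustration.

```python
def hf_download_url(page_url: str) -> str:
    """Turn a Hugging Face file *page* URL (.../blob/main/...) into the
    direct download URL (.../resolve/main/...)."""
    return page_url.replace("/blob/", "/resolve/", 1)

print(hf_download_url(
    "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.ckpt"
))
# → https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt
```

A quick sanity check: a correct link should start downloading a multi-gigabyte file when opened in a browser.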

Saving a copy in Google Drive

You can optionally save a copy of the models in your Google Drive using Save_model_in_Google_Drive. They will be saved in the model loading location, AI_PICS/models.

This option only works with the “Models, images and settings” option, not with “Everything”.

Instruct-Pix2Pix

Editing a photo with Instruct-Pix2Pix.

Instruct-Pix2Pix is a Stable Diffusion model that lets you edit photos with text instruction alone.

To use the instruct-Pix2Pix model, check the instruct_pix2pix_model checkbox. Follow the instructions in this tutorial.

ControlNet

ControlNet is a Stable Diffusion model that can copy the composition and pose of the input image.

This notebook supports ControlNet. See the tutorial article.

Frequently asked questions

Do I need Google Colab Pro to use the notebook?

No, you can use most of the functions of this notebook with the free version of Google Colab. The only thing that doesn’t work is the Stable Diffusion v2 768-pixel model (the v2_1_768_model checkbox), which requires a higher RAM limit.

Do I need to use ngrok?

You don’t need to use ngrok to use the Colab notebook. In my experience, ngrok provides a more stable connection between your browser and the GUI. If you experience issues like buttons not responding, you should try ngrok.

Why do I keep getting disconnected?

There’s a human verification shortly after starting each Colab notebook session. You will get disconnected if you do not respond to it. Make sure to switch back to the Colab notebook and check for verification.

Is saving everything in Google Drive faster?

The first run is slower because files must be downloaded to your Google Drive, which is slower to write to. Subsequent runs range from 20% faster to 50% slower, depending on the speed of accessing data in Google Drive.
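You can measure this yourself by timing writes to local storage versus the mounted Drive. A rough sketch (the benchmark helper is my own; on Colab you would compare `/content` against `/content/drive/MyDrive`):

```python
import os
import tempfile
import time

def write_speed_mb_s(folder, size_mb=32):
    """Time writing `size_mb` MB of zeros to `folder`; return MB/s."""
    path = os.path.join(folder, "bench.tmp")
    chunk = b"\0" * (1024 * 1024)  # 1 MB
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())       # force the data to actually hit storage
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

print(f"{write_speed_mb_s(tempfile.gettempdir(), size_mb=8):.1f} MB/s")
```

Drive writes go over the network, so a large gap between the two numbers is expected.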

Can I use the dreambooth models I trained?

Yes. Models typically need to be converted to be used in AUTOMATIC1111, but if you use the notebook in my Dreambooth tutorial, the conversion has already been done for you.

You will need to select the “Small models, images and settings” option for saving in Google Drive. Put your Dreambooth model in AI_PICS/models. You can rename the model file if desired.

Next Step

If you are new to Stable Diffusion, check out the Absolute beginner’s guide.



41 comments

  1. Any reason this is looking for this directory? mis-spelled?

    sed: can’t read /content/drive/MyDrive/AI_PICS/stable-diffusion-webui/reotpositories/stable-diffusion-stability-ai/ldm/util.py: No such file or directory

    1. Thanks for reporting!

      A typo was introduced during updates. This is a hack for better memory usage. Otherwise it would run out of memory when using ControlNet.

  2. I keep getting this error, is it something on my end?

    ⏳ Installing Stable Diffusion WebUI …
    Tesla T4, 15360 MiB, 15101 MiB
    —————————————————————————
    NameError Traceback (most recent call last)
    in
    149
    150 get_ipython().system(‘mkdir -p {root}’)
    –> 151 os.chdir(root)
    152 get_ipython().system(‘apt-get -y install -qq aria2’)
    153 get_ipython().system(‘pip install pyngrok’)

    NameError: name ‘root’ is not defined

  3. I got this error when running for the first time:

    Mounted at /content/drive
    ⏳ Installing Stable Diffusion WebUI …
    Tesla T4, 15360 MiB, 15101 MiB
    —————————————————————————
    FileNotFoundError Traceback (most recent call last)
    in
    146 print(‘⏳ Installing Stable Diffusion WebUI …’)
    147 get_ipython().system(‘nvidia-smi –query-gpu=name,memory.total,memory.free –format=csv,noheader’)
    –> 148 os.chdir(root)
    149 get_ipython().system(‘apt-get -y install -qq aria2’)
    150 get_ipython().system(‘pip install pyngrok’)

    FileNotFoundError: [Errno 2] No such file or directory: ‘/content/drive/MyDrive/AI_PICS’

    I wonder what I did wrong.

  4. Hey, so this is like my primo notebook – problem is, the extensions crash and won’t reload when I use it. Is it not meant to save extensions to Drive in the extension folder you provide? Also, is it not easier to just do what “LAST BEN” does and literally install SD to Google Drive? LOL. The only reason this one’s my primo is that ngrok is so much more stable, and I’m quite used to the cracked setup I have XD

    1. After installing some extensions, you need to restart the cell (not just the GUI). You are correct that extensions are not saved in Google Drive because that is problematic for some of them. Some extensions require library installs, which won’t persist through sessions.

      I experimented with installing the whole thing in Google Drive, but it was not faster: writing to Google Drive is a lot slower than writing to Colab’s temp storage. Another benefit of the current design is that you never need to deal with a previous install, because it always starts from a fresh copy.

  5. Hello! Wonderful guide, thank you very much for all the work you’ve put into it! I used to be able to use models from my Drive when there were still two boxes under the ‘load models from drive’ section (one box for links, one box for the name of the model itself I believe), but ever since the update I can’t seem to make things work. I’ve tried putting the model (safetensor) into AI_PICS/models, but it won’t appear in AUTOMATIC. Any clues on what I might be doing wrong? Thanks in advance!

  6. Hi there, everything was working great but now it’s giving this error (only when I use premium, not in basic). Have you had any issues?

  7. Great guide! Everything works well, except that I have two problems.

    Firstly, when I put a link to the direct download of a model in the “Model_from_URL:” field, it downloads the model during startup and there are no errors. However, when I go into the automatic1111 interface later, I can not select the model on the top left corner. Only the other models that I ticked from the preselected models are visible.

    Secondly, When I do inpainting for example with the v1.5 inpainting model and then want to switch to another model, for example f222, the colab program crashes and I only see ^c at the end, basically shutting everything down.

    Would be really great if you could help me with these two issues!

    1. Hi, can you check if the url is correct? It should download a large file ( > 2GB) when you put it on a browser. If it is correct, please send me an example URL that doesn’t work.

      I haven’t experienced this issue in v1 models. But it looks like a memory issue. It can also happen when you use v2 768px models. One “easy” solution is to use Google Pro, which has a higher RAM limit. But I will look into it.

  8. I think I just found the best tutorial here. This is a very simple and useful (powerful) notebook. But I still have a few questions:
    1. I couldn’t find any installation files on my Google Drive. Does this mean I need to download all the models again when I re-run this notebook?
    2. After reboot-ui, the connection seems to fail and I have to restart the whole thing. Is this normal, or should I use ngrok to secure the connection?
    3. Local 127.0.0.1 seems not to work. I do have a local sd-webui, but I didn’t run it. Any suggestions?

    Thanks again for your efforts to share this wonderful tutorial.

    1. Hi Silver, glad you find this notebook useful! It is an effort to make SD more accessible.

      1. Yes, every time you rerun the notebook, it installs and runs a fresh copy. You are correct on downloading models every time. Ideally, you will only check the models you will use to keep the startup time short.
      You can add models by stopping the cell, checking more models, and rerunning.
      There will be no download if using models on your Google Drive with the custom model field.

      2. After rebooting webui, you will get a new link on the colab console. Follow the new link.

      3. Localhost is not supposed to work on Colab. That’s why you need the ngrok or gradio links.

  9. Was thinking about it, but after some testing, I realized Google Colab runs about as fast as my PC, so I may run off my desktop. Do you have a tutorial on checkpoint merges?

  10. I see the same error. And the only model it sees is “V1-5-pruned”…
    Can I train a dreambooth model directly from the WebUI interface?

    1. That’s strange. I recreated your file location and file name and can see the hp-15 model showing up. In the file browser of Colab, do you see the file /content/stable-diffusion-webui/models/Stable-diffusion/hp-15.ckpt? If the link exists, the model should be there, and you should see it in the model selection dropdown.

      You can train Dreambooth in the webui, but it is not recommended because it is buggy. And it won’t solve this problem.

  11. Download Results:
    gid |stat|avg speed |path/URI
    ======+====+===========+=======================================================
    09f50d|OK | 0B/s|/content/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt

    Status Legend:
    (OK):download completed.
    ln: failed to create symbolic link ‘/content/stable-diffusion-webui/models/Stable-diffusion/hp-15.ckpt’: File exists

    1. This appears not to be your first time running the notebook since connecting, because the link to your model has already been created.

      You should be able to see and select the hp-15 model in the model dropdown menu. Can you?

      Do you see the error if you restart the Colab notebook runtime?

  12. Hello Andrew !

    Wow, amazing work here 🙂 Thanks a lot!

    Need some help getting my own Dreambooth fine-tuned model (hp-15.ckpt) loaded… I am not able to access it from the WebUI.

    The file is in my Google Drive – AI_PICS/stable-diffusion-webui/models/hp-15.ckpt

    Any idea ?

    Thanks !
    Henri

    1. Hi Henri, putting “AI_PICS/stable-diffusion-webui/models/hp-15.ckpt” to custom_model field should work… What’s the error message on the Colab notebook?

    1. Hi, after downloading the Quick Start Guide, you should see Option 2: Google Colab. There’s a highlight box with a hyperlink icon and a link to the Colab notebook.

  13. Hi thank you for sharing these very useful stuffs!

    I have a question: I noticed that the current Colab will use v1-5-pruned-emaonly.ckpt version of stable diffusion instead of the original v1-5-pruned.ckpt. I tried to install model from URL https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.ckpt but it sent me an error like this:

    Loading weights [716604cc8a] from /content/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt
    loading stable diffusion model: UnpicklingError

    How are we supposed to use the non-emaonly version?

    1. Hi, I believe the URL is incorrect. If you go to this URL, it goes to a webpage instead of downloading the model. (Note that a HuggingFace link for a model file is …/resolve/main/… instead of …/blob/main/….) Go to the page and copy the link that actually downloads the model; that should work.

      A side note: I believe the results are the same between this version and the ema-only one. The ema-only file is smaller; the original contains two sets of weights, which you don’t need unless you are training your own model.

  14. Hey,

    under “Train” -> “Preprocess images”: How do i define the path to “Source directory” and “Destination directory” ?

    1. The source directory is a directory containing your images in the Google Colab environment. The destination directory is an output directory of your choice.

      Check out automatic1111’s documentation on github.

  15. – Thanks for the clarification, now I can work in peace 🙂

    – For #2, is there a way to determine what model version one deals with? For example, if you look at `rachelwalkerstylewatercolour ` referenced earlier, there is no version number or anything like it. I tried it with my 2.1 setup and it didn’t work, but I don’t know the root cause (could be something I misconfigured, or maybe it won’t work in principle). Is there a checklist for what to look at? (e.g., is it for the same image size? is it compatible with my version of SD? etc.)

    – Re: GIMP I am specifically interested in a workflow that occurs entirely inside GIMP, so there is a single UI for everything, without a need to save intermediate images and move them around. This would save a lot of time.

    I use SD to produce illustrations for a children’s poetry book that I write. When there’s less moving around between windows, it is easier to maintain focus on the creative part of the equation.

    1. Hi Alex, I just checked the rachelwalkerstylewatercolour model. It is v1. I was able to download the ckpt file to the model directory and load it without issue.

      Potentially we could look at the network structure, but it’s a hassle to do manually. I usually look at the file size: because most people use the same tools to produce them, v1 models are either 2 or 4 GB.

  16. 1. Can you also explain how to correctly start it up after, say, an overnight break? I followed other guides I found online and they work well for the first time. But if I run the notebook again – various errors occur (e.g., some files already exist, etc.) – so I am never able to smoothly resume work, except by deleting everything, creating a new Colab notebook, etc. There definitely ought to be a better way to do it.

    2. When dealing with custom models at step 5 – how do I know which ones would be compatible? For example, I want to use this one: huggingface.co/BunnyViking/rachelwalkerstylewatercolour along with SD 2.1, would it work out? My previous attempt to do so resulted in some cryptic Python errors that didn’t make sense to me, so I am under the impression that I cannot arbitrarily combine models, that there are requirements that need to be taken into account.

    p.s. I’ve been following your tutorials so far and they’re quite informative, thank you for your work. I’d be interested in materials that explain how to integrate this into GIMP in tandem with SD running on Colab. There are various guides that cover this topic, but the ones on your site actually work without having to improvise and do all sorts of acrobatics to cover the gaps – I’d like to read YOUR tutorial about it.

    1. Hi Alex,

      1. This notebook can be run again after it is disconnected, say overnight. You don’t need to change anything. Every time it runs, it pulls the latest version of AUTOMATIC1111.

      2. The ones that are available for selection in the notebook are compatible. All you need to do is check the box next to one. It takes a bit more setup to install ones that are not on the list. Installing a v2 model is similar to installing 2.1: you will need a config file with the same name. See:
      https://stable-diffusion-art.com/install-stable-diffusion-2-1/
      For installing v1 models, see
      https://stable-diffusion-art.com/models/#How_to_install_and_use_a_model

      Re: Using GIMP with SD
      I have written this tutorial for end-to-end workflow, with some steps using GIMP

      https://stable-diffusion-art.com/workflow/

      Are there any special topics you are interested in?
