How to run SD Forge WebUI on Google Colab

Stable Diffusion Forge WebUI has emerged as a popular way to run Stable Diffusion and Flux AI image models. It is optimized to run fast and pre-installed with many essential extensions. If you don’t have a powerful GPU card, you can run Forge on Google Colab.

This is a detailed guide for using the Google Colab notebook. You can access the notebook by getting the Quick Start Guide.

What is SD Forge?

Once upon a time, AUTOMATIC1111 WebUI (A1111) was the go-to software for running Stable Diffusion locally. Armed with a large “Generate” button, A1111 is perfect for interactive AI image creation.

However, the development of A1111 has lagged behind. It doesn't support the latest local models like Flux. Optimized to run fast and supporting Flux, SD Forge WebUI has gained popularity as a replacement.

Using SD Forge in Google Colab

Google Colab (Google Colaboratory) is an interactive computing service offered by Google. It is a Jupyter Notebook environment that allows you to execute code.

Due to the computing resources required (High-RAM), you need a Google Colab Pro or Pro+ plan to run Forge on Colab.

I recommend using the Colab Pro plan. It gives you 100 compute units per month, which is about 50 hours on a standard T4 GPU. (It's a steal)
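If you want to sanity-check the math, here is a quick back-of-the-envelope estimate, assuming a T4 runtime consumes roughly 2 compute units per hour (the exact rate is shown in Colab's Resources panel and may change):

# Rough estimate of monthly GPU time on the Colab Pro plan.
# The ~2 units/hour T4 rate is an assumption; check the Resources panel for the current rate.
units_per_month = 100
t4_units_per_hour = 2

hours_per_month = units_per_month / t4_units_per_hour
print(f"~{hours_per_month:.0f} hours of T4 time per month")  # ~50 hours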

Alternatives

Think Diffusion provides a fully managed Forge/ComfyUI/AUTOMATIC1111 online service. They cost a bit more than Colab but provide a better user experience by pre-installing models and extensions for you. They offer 20% extra credit to our readers. (Affiliate link)

Running SD Forge on Colab

Step 0: Sign up

Sign up for a Google Colab Pro or Pro+ plan. (I use Pro.)

Step 1: Open the Forge Colab notebook

Open the Forge Colab notebook in the Quick Start Guide. You should see the notebook with the second cell below.

Note: For a quick start, you can skip the following steps and run the notebook with the default settings.

Set the username and password. You will need to enter them before using Forge.
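Under the hood, the notebook most likely forwards these credentials to the WebUI's --gradio-auth launch flag. Here is a minimal sketch; the field names and launch line are illustrative, not the notebook's exact code:

# Hypothetical notebook cell: pass the username/password to Forge's login prompt.
# --gradio-auth is a standard AUTOMATIC1111/Forge launch flag.
Username = "myuser"       # @param {type:"string"}
Password = "mypassword"   # @param {type:"string"}

launch_args = f"--share --gradio-auth {Username}:{Password}"
print(launch_args)
# The notebook would launch Forge with something like:
#   !python launch.py {launch_args}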

Step 2: Select models

Select the models you want to use.

The more you select, the more time it takes to download. They will be downloaded to the Colab runtime's storage, not your Google Drive.

Step 3: Run the notebook

Click the Play button on the left of the cell to start.

Start-up should complete within a few minutes. How long it takes depends on how many models you include. When it is done, you should see the message below.

Step 4: Start Forge

Follow the gradio.live link to start Forge.

Enter the username and password you specified in the notebook.

You should see the Forge GUI after you log in.

Type "a cat" in the prompt text box and press Generate to test Stable Diffusion. You should see it generate an image of a cat.

Runtime type

You can pick a faster runtime to speed up the generation.

Click the downward caret on the top right and then select Change runtime type.

This notebook only supports GPU runtimes. Below is the approximate performance for Flux.1 Dev.

  • T4 GPU: ~2.5 minutes per image.
  • L4 GPU: ~30 seconds per image.

ngrok (Optional)

If you run into display issues with the GUI, you can try using ngrok instead of Gradio to establish the public connection. It is a more stable alternative to the default gradio connection.

You will need to set up a free account and get an authtoken.

  1. Go to https://ngrok.com/
  2. Create an account
  3. Verify email
  4. Copy the authtoken from https://dashboard.ngrok.com/get-started/your-authtoken and paste it into the ngrok field in the notebook.
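For reference, this is roughly how a notebook can open an ngrok tunnel with the pyngrok package, assuming the Forge GUI listens on the default port 7860. This is a sketch, not the notebook's exact code:

# Minimal pyngrok sketch for exposing the local Forge GUI.
from pyngrok import ngrok  # pip install pyngrok

NGROK_AUTHTOKEN = "paste-your-authtoken-here"
ngrok.set_auth_token(NGROK_AUTHTOKEN)

tunnel = ngrok.connect(7860)  # forward the local Forge port
print("Open this URL in your browser:", tunnel.public_url)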

The Stable Diffusion cell in the notebook should look like below after you put in your ngrok authtoken.

Click the play button on the left to start running. When it is done loading, you will see a link to ngrok.io in the output under the cell. Click the ngrok.io link to start Forge. The first link in the example output below is the ngrok.io link.

When you visit the ngrok link, it should show a message like the one below.

Click Visit Site to start the Forge GUI. Occasionally, you will see a warning message that the site is unsafe to visit. It is likely because someone used the same ngrok link to put up something malicious. Since you are the one who created this link, you can ignore the safety warning and proceed.

When you are done

When you finish using the notebook, don’t forget to click “Disconnect and delete runtime” in the top right drop-down menu. Otherwise, you will continue to consume compute credits.

Computing resources and compute units

To view computing resources and credits, click the downward caret next to the runtime type (E.g. T4, High RAM) on the top right. You will see the remaining compute units and usage rate.

Models available

For your convenience, the notebook has options to load some popular models. You will find a brief description of them in this section.

Flux models

Flux AI is a state-of-the-art AI model that produces stunning images. You can use:

  • Flux.1 Dev: The full development Flux model.
  • Flux.1 Schnell: The fast version.

v1.5 models

Stable Diffusion 1.5

The v1.5 model was released after v1.4 and is the last v1 model. Images from this model are very similar to v1.4's. You can treat the v1.5 model as the default v1 base model.

v1.5 inpainting model

The official v1.5 model trained for inpainting.

Realistic Vision

Realistic Vision v2 is good for generating anything realistic, whether people, objects, or scenes.

F222

F222 is good at generating photo-realistic images. It is good at generating females with correct anatomy.

Caution: F222 is prone to generating explicit images. Suppress explicit images with a prompt “dress” or a negative prompt “nude”.

Dreamshaper

Model Page

Dreamshaper is easy to use and good at generating a popular photorealistic illustration style. It is an easy way to “cheat” and get good images without a good prompt!

Open Journey Model

Model Page

Open Journey is a model fine-tuned with images generated by Midjourney v4. It has a different aesthetic and is a good general-purpose model.

Triggering keyword: mdjrny-v4 style

Anything v3

Model Page

Anything V3 is a special-purpose model trained to produce high-quality anime-style images. You can use danbooru tags (like 1girl, white hair) in the text prompt.

It's useful for casting celebrities to anime style, which can then be blended seamlessly with illustrative elements.

Inkpunk Diffusion

Inkpunk Diffusion is a Dreambooth-trained model with a very distinct illustration style.

Model Page

Use keyword: nvinkpunk

SDXL model

This Colab notebook supports the SDXL 1.0 base and refiner models.

Select SDXL_1 to load the SDXL 1.0 model.

Important: Don’t use VAE from v1 models. Go to Settings > Stable Diffusion. Set SD VAE to AUTOMATIC or None.

Check out some SDXL prompts to get started.

ControlNet models

Forge comes with the ControlNet extension installed, but you still need to download the ControlNet models.

Alternatively, you can put the ControlNet models in the Google Drive folder AI_PICS > models > ControlNet.

Installing models

There are two ways to install models that are not on the model selection list.

  1. Use the Checkpoint_models_from_URL and LoRA_models_from_URL fields.
  2. Put model files in your Google Drive.

Install models using URLs

You can only install checkpoint or LoRA models using this method.

Put the download URL links in the field. The link should initiate the file download when you visit it in your browser.

  • Checkpoint_models_from_URL: Use this field for checkpoint models.
  • LoRA_models_from_URL: Use this field for LoRA models.

Some models on CivitAI need an API key to download. Go to the account page on CivitAI to create a key and put it in the Civitai_API_Key field.

Below is an example of getting the download link on CivitAI.

Put it in the Checkpoint_models_from_URL field.
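For reference, a download from these fields boils down to something like the sketch below. The install path is an assumption and the URL is a placeholder; CivitAI accepts the API key as a token query parameter on download links.

# Illustrative sketch of downloading a checkpoint from a direct URL with an optional CivitAI key.
import os
import subprocess

Checkpoint_models_from_URL = "https://civitai.com/api/download/models/000000"  # placeholder URL
Civitai_API_Key = ""  # leave empty if the model does not require a key

url = Checkpoint_models_from_URL
if Civitai_API_Key and "civitai.com" in url:
    sep = "&" if "?" in url else "?"
    url = f"{url}{sep}token={Civitai_API_Key}"

# Assumed Forge install location on the Colab runtime.
dest = "/content/stable-diffusion-webui-forge/models/Stable-diffusion"
os.makedirs(dest, exist_ok=True)
subprocess.run(["wget", "-q", "--content-disposition", "-P", dest, url], check=True)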

Installing models in Google Drive

After running the notebook for the first time, you should see the folder AI_PICS > models created in your Google Drive. The folder structure inside this folder mirrors AUTOMATIC1111's and is designed to share models with other notebooks from this site.

Put your model files in the corresponding folder. For example,

  • Put checkpoint model files in AI_PICS > models > Stable-diffusion.
  • Put LoRA model files in AI_PICS > models > Lora.

You will need to restart the notebook to see the new models on Forge.
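If you are curious how the notebook sees these files, Google Drive is mounted inside the Colab runtime and the AI_PICS folders map to paths like the ones below. This is a sketch; the exact wiring in the notebook may differ.

# Mount Google Drive and create the shared model folders if they don't exist yet.
import os
from google.colab import drive

drive.mount('/content/drive')

checkpoint_dir = '/content/drive/MyDrive/AI_PICS/models/Stable-diffusion'
lora_dir = '/content/drive/MyDrive/AI_PICS/models/Lora'
os.makedirs(checkpoint_dir, exist_ok=True)
os.makedirs(lora_dir, exist_ok=True)

print(os.listdir(checkpoint_dir))  # your checkpoint files should be listed here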

Installing extensions from URL

You can install any number of extensions by using this field. You will need the URL of the GitHub page of the extension.

For example, put in the following if you want to install the Civitai model extension.

https://github.com/civitai/sd_civitai_extension

You can also install multiple extensions. The URLs need to be separated with commas. For example, the following URLs install the Civitai and the multi-diffusion extensions.

https://github.com/civitai/sd_civitai_extension,https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111
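Installing from this field amounts to cloning each repository into Forge's extensions folder. Below is a sketch of the idea; the field name and install path are assumptions based on the notebook's description, not its exact code.

# Clone each comma-separated extension URL into the extensions folder.
import os
import subprocess

Extensions_from_URL = (
    "https://github.com/civitai/sd_civitai_extension,"
    "https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111"
)
EXT_DIR = "/content/stable-diffusion-webui-forge/extensions"  # assumed install path
os.makedirs(EXT_DIR, exist_ok=True)

for url in (u.strip() for u in Extensions_from_URL.split(",") if u.strip()):
    name = url.rstrip("/").split("/")[-1]
    target = os.path.join(EXT_DIR, name)
    if not os.path.exists(target):
        subprocess.run(["git", "clone", url, target], check=True)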

Extra arguments to webui

You can add extra arguments to the Web-UI by using the Extra_arguments field.
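In effect, whatever you type in Extra_arguments is appended to the launch command, roughly like the sketch below. The launch line and example flag are illustrative, not the notebook's exact code.

# Sketch: extra arguments are appended verbatim to the WebUI launch command.
Extra_arguments = "--no-half-vae"  # an AUTOMATIC1111-style flag; check Forge's own list of supported flags

launch_cmd = f"python launch.py --share {Extra_arguments}"
print(launch_cmd)
# In the notebook, this would run as:  !{launch_cmd}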

Other useful arguments are listed in the AUTOMATIC1111 command-line arguments documentation.

Regional prompter lets you use different prompts for different regions of the image. It is a valuable extension for controlling the composition and placement of objects.

Frequently asked questions

Do I need a paid account to use the notebook?

Yes, you need a paid Google Colab account to use this notebook. Google has blocked the free usage of Stable Diffusion.

Is there any alternative to Google Colab?

Yes, Think Diffusion provides fully-managed Forge/AUTOMATIC1111/ComfyUI WebUI web service. They offer 20% extra credit to our readers. (Affiliate link)

Do I need to use ngrok?

You don’t need to use ngrok to use the Colab notebook. In my experience, ngrok provides a more stable connection between your browser and the GUI. If you experience issues like buttons not responding, you should try ngrok.

Can I use the checkpoint and LoRA models I trained?

Yes, put the model file in the corresponding folder in Google Drive.

  • Checkpoint models: AI_PICS > models > Stable-diffusion.
  • LoRA models: AI_PICS > models > Lora.

Why do my SDXL images look garbled?

Check to make sure you are not using a VAE from v1 models. Check Settings > Stable Diffusion > SD VAE. Set it to None or Automatic.

Next Step

If you are new to Stable Diffusion, check out the Absolute beginner’s guide.

By Andrew

Andrew is an experienced engineer with a specialization in Machine Learning and Artificial Intelligence. He is passionate about programming, art, photography, and education. He has a Ph.D. in engineering.
