A killer application of Stable Diffusion is training your own model. Because the software is open source, the community has developed easy-to-use tools for it.
Training a LoRA model is a smart alternative to training a checkpoint model. Although LoRA training is less powerful than whole-model methods like Dreambooth or finetuning, LoRA files have the benefit of being small. You can store many of them without filling up your local storage.
Why train your own model? You may have an art style you want to put in Stable Diffusion. Or you want to generate a consistent face in multiple images. Or it’s just fun to learn something new!
In this post, you will learn how to train your own LoRA models using a Google Colab notebook. So, you don’t need to own a GPU to do it.
This tutorial is for training a Stable Diffusion v1 LoRA or LyCORIS model. (In AUTOMATIC1111 WebUI, they are all called Lora.)
Software
You will use a Google Colab notebook to train the Stable Diffusion LoRA model. No GPU hardware is required from you.
You will need Stable Diffusion software to use the LoRA model. I recommend using AUTOMATIC1111 Stable Diffusion WebUI.
Get the Quick Start Guide to find out how to start using Stable Diffusion.
Train a Lora model
Step 1: Collect training images
The first step is to collect training images.
Let’s pay tribute to Andy Lau, one of the Four Heavenly Kings of Cantopop in Hong Kong, and immortalize him in a Lora…

Google Image Search is a good way to collect images.

You need at least 15 training images.
It is okay to have images with different aspect ratios. Make sure to turn on the bucketing option in training, which sorts the images into buckets of similar aspect ratios during training.
Pick images that are at least 512×512 pixels for v1 models.
Make sure the images are either PNG or JPEG formats.
I collected 16 images for training. You can download them to follow this tutorial.
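If you want to sanity-check your images before uploading, here is a minimal Python sketch (the local folder name training_images is an assumption for illustration; it requires the Pillow package):

from pathlib import Path
from PIL import Image  # pip install Pillow

folder = Path("training_images")  # hypothetical local folder of training images
for path in sorted(folder.iterdir()):
    if path.suffix.lower() not in (".png", ".jpg", ".jpeg"):
        print(f"{path.name}: not PNG/JPEG, convert or remove")
        continue
    with Image.open(path) as img:
        width, height = img.size
    # For v1 models, the short side should be at least 512 pixels
    status = "OK" if min(width, height) >= 512 else "too small"
    print(f"{path.name}: {width}x{height} ({status})")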
Step 2: Upload images to Google Drive
Open the LoRA trainer notebook.
You will need to save the training images to your Google Drive so the LoRA trainer can access them. Use the first cell of the notebook to upload them.

Here are some inputs to review before running the cell.
Project_folder: A folder in Google Drive containing all training images and captions. Use a folder name that doesn’t exist yet.
dataset_name: The name of the dataset.
Number_of_epoches: How many times each image will be used for training.
Lora_output_path: A folder in Google Drive where the Lora file will be saved.
Run this cell by clicking the Play button on the left. It will ask you to connect to your Google Drive.
Click Choose Files and select your training images.
When it is done, you should see a message saying the images were uploaded successfully.
There are three folder paths listed. We will need them later.

Now, go to your Google Drive. The images should be uploaded to My Drive > AI_PICS > training > AndyLau > 100_AndyLau.
It should look like the screenshot below.

Note: All image folders inside the project folder will be used for training. You only need one folder in most cases. So, change to a different project name before uploading a new image set.
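A note on the 100_ prefix: in the standard kohya_ss convention, the number in front of the image folder name is the number of repeats, i.e., how many times each image is shown per epoch. Assuming the notebook follows this convention, a rough back-of-the-envelope estimate of the total step count looks like this (the batch size of 1 is an assumption for illustration):

images = 16      # training images in this tutorial
repeats = 100    # from the folder prefix 100_AndyLau
epochs = 1       # the Number_of_epoches setting
batch_size = 1   # assumed; check your trainer settings

steps = images * repeats * epochs // batch_size
print(steps)     # 1600 steps with these values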
Step 3: Create captions
You need to provide a caption for each image. Each caption is a text file with the same name as its image. We will generate the captions automatically using the LoRA trainer.
Running the LoRA trainer
Go to the Train Lora cell. Review the username and password. You will need them after starting the GUI.
Start the notebook by clicking the Play button in the Lora trainer cell.

It will take a while to load. It is ready when you see the Gradio.live link.

Click the link. A new tab showing the Kohya_ss GUI should open.
Go to the Utilities page. Select the Captioning tab, and then BLIP Captioning sub-tab.

Image path
You can find the image folder to caption in the printout of the first cell, after uploading the images.

/content/drive/MyDrive/AI_PICS/training/AndyLau/100_AndyLau
Other settings
The auto-generated captions can sometimes be too short. Set the Min length to 20.
Start auto-captioning
Press the Caption Images button to generate a caption for each image automatically.
Check the Google Colab Notebook for status. It should be running the captioning model. You will see the message “captioning done” when the captioning is completed.

Revising the captions
You should read and revise each caption so that it matches the image. You must also add the phrase “Andy Lau” to each caption.
This is best done from your Google Drive page. You should see a text file with the same name generated for each image.

For example, the auto-generated caption of the first image is
A man in a black jacket smoking a cigarette in front of a fenced in building
We want to include the keyword Andy Lau. The revised prompt is
Andy Lau in a black jacket smoking a cigarette in front of a fenced in building
Use the Open with… function to open a caption file in your favorite editor and make the change. The default editor should work, even if it launches locally on your PC. Save the changes. You may need to refresh the Google Drive page to see them.

Revise the captions for all images.
When you are done, go through the text files one more time to make sure every one of them includes “Andy Lau”. This is important for training a specific person.
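If you prefer to check this programmatically instead of eyeballing every file, a small sketch like the one below flags images without captions and captions missing the trigger phrase (run it in a Colab cell after mounting your Drive; the folder path is the one printed after uploading):

from pathlib import Path

folder = Path("/content/drive/MyDrive/AI_PICS/training/AndyLau/100_AndyLau")
trigger = "Andy Lau"

for img in sorted(folder.iterdir()):
    if img.suffix.lower() not in (".png", ".jpg", ".jpeg"):
        continue
    caption = img.with_suffix(".txt")
    if not caption.exists():
        print(f"{img.name}: missing caption file")
    elif trigger not in caption.read_text():
        print(f"{caption.name}: does not mention '{trigger}'")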
You can download the captions I created to follow the tutorial if you wish.
Step 4: LoRA training
We now have images and captions. We are ready to start the LoRA training!
Source model
In Kohya_ss GUI, go to the LoRA page. Select the Training tab. Select the Source model sub-tab. Review the model in Model Quick Pick.

Some popular models you can start training on are:
Stable Diffusion v1.5
runwayml/stable-diffusion-v1-5
The Stable Diffusion v1.5 model is the latest version of the official v1 model.
Realistic Vision v2
SG161222/Realistic_Vision_V2.0
Realistic Vision v2 is good for training photo-style images.
Anything v3
https://huggingface.co/Linaqruf/anything-v3.0
Anything v3 is good for training anime-style images.
Folders
Now, switch to the Folders sub-tab.

In the Image folder field, enter the folder CONTAINING the image folder. You can copy the path from Lora Image folder in the printout after uploading the images.

/content/drive/MyDrive/AI_PICS/training/AndyLau
In the Output folder field, enter the location where you want the LoRA file to be saved. You can copy the path from Lora output folder in the printout after uploading the images.

The default location is the Lora folder of the Stable Diffusion notebook so that it can be directly used in the WebUI.
/content/drive/MyDrive/AI_PICS/Lora
Finally, name your LoRA in the Model output name field.
AndyLau100
Parameters
Now, switch to the Parameters sub-tab. If you have just started out in training LoRA models, using a preset is the way to go. Select sd15-EDG_LoraOptiSettings for training a Standard LoRA.

There are presets for different types of LyCORIS, which are more powerful versions of LoRA. See the LyCORIS tutorial for a primer.
Finally, the T4 GPU on Colab doesn’t support bf16 mixed precision. You MUST:
- Change Mixed precision and Save precision to fp16.
- Change Optimizer to AdamW.
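For reference, these GUI settings are passed to kohya_ss’s training script as command-line flags. When training starts, you should be able to spot them among the arguments printed in the Colab log, roughly like this (the full argument list depends on your other settings):

--mixed_precision=fp16 --save_precision=fp16 --optimizer_type=AdamW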

Start training!
Now everything is in place. Scroll down and click Start training to start the training process.

Check the progress on the Colab Notebook page. It will take a while.
It is okay to see some warnings. The training only fails when it encounters an error.
When it completes successfully, you should see the progress at 100%. The loss value should be a number, not nan.

Using the LoRA
If you save the LoRA in the default output location (AI_PICS/Lora), you can easily use the Stable Diffusion Colab Notebook to load it.
Open AUTOMATIC1111 Stable Diffusion WebUI in Google Colab. Click the Extra Networks button under the Generate button. Select the Lora tab and click the LoRA you just created.

Here are the prompt and the negative prompt:
Andy Lau in a suit, full body <lora:AndyLau100:1>
ugly, deformed, nsfw, disfigured
Since we used the phrase “Andy Lau” in the training captions, you will need it in the prompt for the LoRA to take effect.
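The number after the second colon is the LoRA weight. For example, to dial the effect down a little, you could write (0.8 is just an illustrative value):

Andy Lau in a suit, full body <lora:AndyLau100:0.8>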
Although the LoRA is trained on the Stable Diffusion v1.5 model, it works equally well with the Realistic Vision v2 model.
Here are the results of the Andy Lau LoRA.


Remarks
This step-by-step guide shows you how to train a LoRA. You can select other presets to train a LyCORIS.
Important parameters in training are:
- Network rank: the size of the LoRA. The higher the rank, the more information the LoRA can store. (Reference value: 64)
- Network alpha: a scaling factor that prevents the weights from being rounded to zero during training. The strength of the LoRA update is controlled by network alpha divided by network rank, as the sketch after this list shows. (Reference value: 64)
- Test different LoRA weights (<lora:AndyLau100:weight>) when using the LoRA. Sometimes 1 is not the optimal value.
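To make the rank/alpha relationship concrete, here is a minimal numpy sketch of how a trained LoRA update is applied to a single weight matrix (the layer dimensions and random values are made up for illustration; real code does this inside the model’s attention layers):

import numpy as np

rank, alpha = 64, 64
d_out, d_in = 320, 320                # made-up layer dimensions

W = np.random.randn(d_out, d_in)      # frozen base model weight
A = np.random.randn(rank, d_in)       # trained low-rank factor
B = np.random.randn(d_out, rank)      # trained low-rank factor

scale = alpha / rank                  # network alpha divided by network rank
lora_weight = 1.0                     # the number in <lora:AndyLau100:1>

W_effective = W + lora_weight * scale * (B @ A)
print(W_effective.shape)              # (320, 320), same shape as the base weight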
Reference
LoRA training parameters – An authoritative reference of training parameters.
LEARN TO MAKE LoRA – A graphical guide to training LoRA.
kohya_ss Documentation – English translation of kohya_ss manual.
A quick note for people having an issue with the following fatal error:
ImportError: cannot import name 'StableDiffusionPipeline' from 'diffusers' (E:\Py\env\lib\site-packages\diffusers\__init__.py)
Open the command line in Google Colab (at the bottom left) and run these two commands:
pip uninstall diffusers
pip install diffusers
This fixed it for me.
source:
https://stackoverflow.com/questions/73992681/importerror-cannot-import-name-stablediffusionpipeline-from-diffusers
There does seem to be another issue; I’m having trouble fixing this one.
Downloading (…)del.fp16.safetensors: 100% 1.72G/1.72G [01:28<00:00, 19.5MB/s]
Fetching 11 files: 100% 11/11 [01:30<00:00, 8.24s/it]
Loading pipeline components…: 100% 4/4 [00:01<00:00, 2.04it/s]
Traceback (most recent call last):
File "/content/kohya_ss/./sdxl_train_network.py", line 176, in <module>
trainer.train(args)
File "/content/kohya_ss/train_network.py", line 214, in train
model_version, text_encoder, vae, unet = self.load_target_model(args, weight_dtype, accelerator)
File "/content/kohya_ss/./sdxl_train_network.py", line 37, in load_target_model
) = sdxl_train_util.load_target_model(args, accelerator, sdxl_model_util.MODEL_VERSION_SDXL_BASE_V1_0, weight_dtype)
File "/content/kohya_ss/library/sdxl_train_util.py", line 34, in load_target_model
) = _load_target_model(
File "/content/kohya_ss/library/sdxl_train_util.py", line 84, in _load_target_model
pipe = StableDiffusionXLPipeline.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/pipeline_utils.py", line 1191, in from_pretrained
raise ValueError(
ValueError: Pipeline expected {'vae', 'text_encoder', 'unet', 'tokenizer', 'scheduler', 'text_encoder_2', 'tokenizer_2'}, but only {'vae', 'text_encoder', 'unet', 'tokenizer', 'scheduler'} were passed.
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 986, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 628, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', './sdxl_train_network.py', '--enable_bucket', '--min_bucket_reso=256', '--max_bucket_reso=2048', '--pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5', '--train_data_dir=/content/drive/MyDrive/AI_PICS/training/ChrisAllen', '--resolution=512,650', '--output_dir=/content/drive/MyDrive/AI_PICS/Lora', '--network_alpha=64', '--save_model_as=safetensors', '--network_module=networks.lora', '--text_encoder_lr=5e-05', '--unet_lr=0.0001', '--network_dim=64', '--output_name=blah', '--lr_scheduler_num_cycles=1', '--no_half_vae', '--learning_rate=0.0001', '--lr_scheduler=constant', '--train_batch_size=3', '--max_train_steps=3500', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=1234', '--caption_extension=.txt', '--cache_latents', '--cache_latents_to_disk', '--optimizer_type=AdamW', '--max_data_loader_n_workers=1', '--clip_skip=2', '--bucket_reso_steps=64', '--mem_eff_attn', '--xformers', '--bucket_no_upscale', '--noise_offset=0.05']' returned non-zero exit status 1.
03:18:00-173236 INFO There is no running process to kill.
03:18:00-173236 INFO There is no running process to kill.
Hi Andrew,
I have one request
Can you please look at this article https://civitai.com/articles/2345
and try to create the same results using automatic1111?
This can be a great article, and I will be willing to pay for it.
You can create a separate extension.
Please let me know.
MessageError Traceback (most recent call last)
in ()
2 #@markdown Beginners: Use a different `Project_folder` each time when you upload the images.
3 from google.colab import drive
----> 4 drive.mount('/content/drive')
5
6 Project_folder = 'AI_PICS/training/AndyLau' #@param {type:"string"}
3 frames
/usr/local/lib/python3.10/dist-packages/google/colab/_message.py in read_reply_from_input(message_id, timeout_sec)
101 ):
102 if 'error' in reply:
--> 103 raise MessageError(reply['error'])
104 return reply.get('data', None)
105
MessageError: Error: credential propagation was unsuccessful
Why do I see this error message pop up?
It appears that you didn’t connect your Google Drive to the notebook. It has to be the same account as the Colab.
Thank you for the amazing tutorial. I had to go back on Gumroad and give you some money. That’s the one tutorial that was easy to follow, with an amazing description. This makes having fun with AI possible for the common folk. Gradio did give me some issues with crashing, but otherwise, this has been a smooth ride. Thank you so much!
Thank you!
Any idea why I get “Unexpected token ‘<‘, “<!DOCTYPE”… is not valid JSON” in Kohya when I try to caption and when I try to set the folder for training?
Looks like it is caused by an invalid character in the captions. Try using only English letters, commas, and periods.
Is there a way to do it without the GPU? I tried a few times and lost my free GPU quota through stupid mistakes, and never could finish my LoRA… now I’m out of GPU time and stuck on this tutorial :/
I’m on macOS, for the record.
No, but you can always create a new Google account.
Is there a way to resume training of a LoRA that ran for 100 epochs? If so, where?
Thanks for the answer.
1. So, should we then check the “Enable buckets” option in Parameters? It is unchecked by default.
“Finally, name your LoRA in the Model output name field. AnyLau100”
Please fix to AndyLau100
good call! fixed.
By the way, two questions:
1. Why do you say that it is not necessary to crop images to 512×512 pixels? All manuals on training LoRA recommend doing this.
2. What’s the reason to change “a man” to “Andy Lau”? I didn’t change it, and the LoRA still worked.
1. Those must be old guides. New trainers have a function called bucketing that can make use of images with different aspect ratios.
2. Using “a man” would make all men look like Andy Lau. (which may not be a bad thing!)
OK, this is great, but my Google Drive is “My Drive” with a space between My and Drive. I’m assuming this won’t work for me because the process won’t recognize the space? Is there another way around this? Apparently, I can’t rename my Google Drive?
This should not matter. Have you tried?
where is the notebook link?
Does it work with SDXL?
thank you
Never mind, I was blind lol
But does it work with SDXL?
lol.
I haven’t tested SDXL, but the notebook has the option. You are welcome to try, and let me know…
Simple and to the point, colab too !
Thank you for these tutorials
Hello! Fantastic tutorial!
I want to make a LoRA of my friend (I have his permission). I have some fantastic high-res color as well as black-and-white/grayscale images of him.
My question is: can I include the black-and-white/grayscale images in the training dataset?
Looking to hear from you!