This topic has 2 replies, 2 voices, and was last updated 10 months, 4 weeks ago by Siva Shanmugam Manickam.
January 2, 2024 at 9:36 pm #10371
<h3>Easy LoRA training tutorial – train your own Stable Diffusion model</h3>
https://andrewongai.gumroad.com/l/lora_training

I'm receiving an error after training the model using the images provided by Andrew. I would appreciate any suggestions to resolve it.

Error:
:28-323804 INFO accelerate launch --num_cpu_threads_per_process=2 "./train_network.py"
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5"
  --train_data_dir="/content/drive/MyDrive/AI_PICS/training/AndyLauGanesh"
  --resolution="512,650" --output_dir="/content/drive/MyDrive/AI_PICS/Lora"
  --network_alpha="64" --save_model_as=safetensors
  --network_module=networks.lora --text_encoder_lr=5e-05 --unet_lr=0.0001
  --network_dim=64 --output_name="lastganeshhelp"
  --lr_scheduler_num_cycles="1" --no_half_vae --learning_rate="0.0001"
  --lr_scheduler="constant" --train_batch_size="3" --max_train_steps="534"
  --save_every_n_epochs="1" --mixed_precision="bf16" --save_precision="bf16"
  --seed="1234" --caption_extension=".txt" --cache_latents
  --optimizer_type="AdamW" --max_data_loader_n_workers="1" --clip_skip=2
  --bucket_reso_steps=64 --mem_eff_attn --xformers --bucket_no_upscale
  --noise_offset=0.05
/usr/local/lib/python3.10/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/usr/local/lib/python3.10/dist-packages/torchvision/image.so: undefined symbol: _ZN3c104cuda9SetDeviceEi' If you don't plan on using image functionality from torchvision.io, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have libjpeg or libpng installed before building torchvision from source?
  warn(
The following values were not passed to accelerate launch and had defaults used instead:
  --num_processes was set to a value of 0
  --num_machines was set to a value of 1
  --mixed_precision was set to a value of 'no'
  --dynamo_backend was set to a value of 'no'
To avoid this warning pass in values for each of the problematic parameters or run accelerate config.
2024-01-02 03:39:44.970337: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-01-02 03:39:44.970504: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-01-02 03:39:45.161302: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-01-02 03:39:48.650217: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
prepare tokenizer
vocab.json: 100% 961k/961k [00:00<00:00, 14.7MB/s]
merges.txt: 100% 525k/525k [00:00<00:00, 14.5MB/s]
special_tokens_map.json: 100% 389/389 [00:00<00:00, 1.14MB/s]
tokenizer_config.json: 100% 905/905 [00:00<00:00, 1.64MB/s]
Using DreamBooth method.
prepare images.
found directory /content/drive/MyDrive/AI_PICS/training/AndyLauGanesh/100_AndyLauganesh contains 16 image files
1600 train images with repeating.
0 reg images.
no regularization images / regularization images were not found
[Dataset 0]
  batch_size: 3
  resolution: (512, 650)
  enable_bucket: False

  [Subset 0 of Dataset 0]
    image_dir: "/content/drive/MyDrive/AI_PICS/training/AndyLauGanesh/100_AndyLauganesh"
    image_count: 16
    num_repeats: 100
    shuffle_caption: False
    keep_tokens: 0
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0.0
    caption_prefix: None
    caption_suffix: None
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1,
    token_warmup_step: 0,
    is_reg: False
    class_tokens: AndyLauganesh
    caption_extension: .txt
[Dataset 0]
loading image sizes.
100% 16/16 [00:00<00:00, 449.34it/s]
prepare dataset
preparing accelerator
loading model for process 0/1
load Diffusers pretrained models: runwayml/stable-diffusion-v1-5
model_index.json: 100% 541/541 [00:00<00:00, 1.85MB/s]
vae/config.json: 100% 547/547 [00:00<00:00, 1.96MB/s]
text_encoder/config.json: 100% 617/617 [00:00<00:00, 466kB/s]
unet/config.json: 100% 743/743 [00:00<00:00, 406kB/s]
scheduler/scheduler_config.json: 100% 308/308 [00:00<00:00, 828kB/s]
(…)ature_extractor/preprocessor_config.json: 100% 342/342 [00:00<00:00, 918kB/s]
diffusion_pytorch_model.safetensors: 100% 335M/335M [00:13<00:00, 25.1MB/s]
model.safetensors: 100% 492M/492M [00:15<00:00, 32.1MB/s]
diffusion_pytorch_model.safetensors: 100% 3.44G/3.44G [00:55<00:00, 61.7MB/s]
Fetching 9 files: 100% 9/9 [00:57<00:00, 6.34s/it]
(intermediate download progress lines trimmed)
Loading pipeline components…: 100% 5/5 [00:01<00:00, 2.73it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
UNet2DConditionModel: 64, 8, 768, False, False
U-Net converted to original U-Net
Enable memory efficient attention for U-Net
Traceback (most recent call last):
  File "/content/kohya_ss/./train_network.py", line 990, in <module>
    trainer.train(args)
  File "/content/kohya_ss/./train_network.py", line 222, in train
    vae.set_use_memory_efficient_attention_xformers(args.xformers)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 263, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 259, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 259, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 259, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 256, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 255, in set_use_memory_efficient_attention_xformers
    raise ValueError(
ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 47, in main
    args.func(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 1017, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 637, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', './train_network.py', '--pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5', '--train_data_dir=/content/drive/MyDrive/AI_PICS/training/AndyLauGanesh', '--resolution=512,650', '--output_dir=/content/drive/MyDrive/AI_PICS/Lora', '--network_alpha=64', '--save_model_as=safetensors', '--network_module=networks.lora', '--text_encoder_lr=5e-05', '--unet_lr=0.0001', '--network_dim=64', '--output_name=lastganeshhelp', '--lr_scheduler_num_cycles=1', '--no_half_vae', '--learning_rate=0.0001', '--lr_scheduler=constant', '--train_batch_size=3', '--max_train_steps=534', '--save_every_n_epochs=1', '--mixed_precision=bf16', '--save_precision=bf16', '--seed=1234', '--caption_extension=.txt', '--cache_latents', '--optimizer_type=AdamW', '--max_data_loader_n_workers=1', '--clip_skip=2', '--bucket_reso_steps=64', '--mem_eff_attn', '--xformers', '--bucket_no_upscale', '--noise_offset=0.05']' returned non-zero exit status 1.
January 2, 2024 at 9:38 pm #10425
Looks like you are not using a GPU runtime on Colab. Please select a GPU runtime such as T4 or V100.
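The last line of the first traceback confirms this: torch.cuda.is_available() returned False, and xformers' memory-efficient attention requires a CUDA GPU. As a quick sanity check before relaunching training, you can run a small cell like the one below. This is a hypothetical helper, not part of the tutorial notebook; it only assumes PyTorch is importable, as it is in the kohya_ss environment.

```python
def gpu_status() -> str:
    """Report whether PyTorch can see a CUDA GPU in this runtime."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed in this environment"
    if torch.cuda.is_available():
        # A GPU runtime is active; xformers memory-efficient attention can work.
        return f"GPU available: {torch.cuda.get_device_name(0)}"
    # No GPU visible: in Colab, use Runtime > Change runtime type and pick a GPU.
    return "No GPU detected: switch the Colab runtime to a GPU (e.g. T4)"

print(gpu_status())
```

If this prints the "No GPU detected" message, switch the runtime and re-run the training cell from the start, since the environment is reset when the runtime changes.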
January 2, 2024 at 10:40 pm #10428
Thanks Andrew, I will try that and let you know the outcome.
Regards,
Siva Manickam