Hi 🙂
I'm currently using two LoRA models together with hires.fix, and I get this error message:
OutOfMemoryError: CUDA out of memory. Tried to allocate 11.52 GiB. GPU 0 has a total capacity of 14.75 GiB of which 11.28 GiB is free. Process 403699 has 3.47 GiB memory in use. Of the allocated memory 2.41 GiB is allocated by PyTorch, and 908.57 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Any idea how I can get past this? I have a paid Colab account and should have enough computing power, or am I overlooking something?
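The error itself suggests setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`. My guess (not sure if this is right) is that it only takes effect if it's set before PyTorch initializes CUDA, so in Colab I'd set it in the cell that launches the UI, something like:

```python
import os

# Assumption: this must be set in the environment before torch touches the GPU,
# i.e. before importing torch or launching the webui from this cell. It reduces
# fragmentation; it won't help if a single allocation genuinely exceeds VRAM.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
```

Is that the right place to set it, or does it need to be exported in the shell that starts the webui process instead?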