Flux is a family of text-to-image diffusion models developed by Black Forest Labs. As of August 2024, it is the best open-source image model you can run locally on your PC, surpassing the quality of SDXL and Stable Diffusion 3 Medium.
The Flux.1 dev AI model has very good prompt adherence, generates high-quality images with correct anatomy, and is pretty good at generating text.
In this tutorial, you will learn how to install a few variants of the Flux model locally in ComfyUI.
- The single-file version for easy setup.
- The fast version for speedy generation.
- The regular version for the highest quality.
Note:
- Flux is currently unavailable on AUTOMATIC1111, but you can use Flux on Forge with a similar interface.
- Use Flux on Forge if your GPU card has low VRAM (6GB).
Software
We will use ComfyUI, an alternative to AUTOMATIC1111. You can use it on Windows, Mac, or Google Colab. If you prefer using a ComfyUI service, Think Diffusion offers our readers an extra 20% credit.
Read the ComfyUI beginner’s guide if you are new to ComfyUI. See the Quick Start Guide if you are new to AI images and videos.
Take the ComfyUI course to learn how to use ComfyUI step-by-step.
Flux AI Model variants
The following variants are available in ComfyUI:
- Single-file FP8 version: This reduced-precision model is self-contained in a single checkpoint file. It is easy to use and requires less VRAM (16 GB).
- Flux Schnell version: A distilled 4-step model that trades some quality for faster sampling (16 GB).
- Regular FP16 full version: This is the full-precision version with slightly higher quality. It needs more VRAM (24 GB).
Start with the single-file FP8 version if this is your first time using Flux.
Single-file Flux FP8 model on ComfyUI
This is the easiest way to use Flux with a single checkpoint model file. ComfyUI has native support for Flux.
You need 16 GB of VRAM to run this workflow.
Step 1: Download the Flux AI model
Download the Flux1 dev FP8 checkpoint.
Put the model file in the folder ComfyUI > models > checkpoints.
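If you prefer to script the download instead of using the browser, here is a minimal sketch using the huggingface_hub Python package. The repo ID and filename below are assumptions (they match the commonly used FP8 single-file release); use whatever the download link above points to, and adjust local_dir to your ComfyUI installation path.

```python
# Minimal sketch: download the single-file FP8 checkpoint with huggingface_hub.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="Comfy-Org/flux1-dev",            # assumed repo hosting the FP8 checkpoint
    filename="flux1-dev-fp8.safetensors",      # assumed filename
    local_dir="ComfyUI/models/checkpoints",    # change to your ComfyUI path
)
```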
Step 2: Update ComfyUI
ComfyUI has had native support for Flux since August 2024. Update ComfyUI if you haven't done so since then.
The easiest way to update ComfyUI is through the ComfyUI Manager. Click Manager > Update All.
Make sure to reload the ComfyUI page after the update. Clicking the restart button is not enough.
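If you installed ComfyUI by cloning the git repository rather than using the portable build, you can also update it manually. This is a minimal sketch assuming a standard git install; the portable build ships its own scripts in its update folder instead.

```python
# Minimal sketch of a manual update for a git-based ComfyUI install.
# Assumes "ComfyUI" is the cloned repository folder.
import subprocess
import sys

subprocess.run(["git", "pull"], cwd="ComfyUI", check=True)
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
    cwd="ComfyUI",
    check=True,
)
```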
Step 3: Load Flux dev workflow
Download the Flux1 dev FP8 workflow JSON file below.
Drag and drop it onto the ComfyUI page in your browser. Update ComfyUI and reload the page if you see red boxes.
Press Queue Prompt to generate an image.
Photo of a cute woman, in kitchen cooking turkey, looking at viewer, left hand fixing her beautiful hair, holding a kitchen knife in the right hand.
Flux Fast model (Schnell)
Flux Schnell is for you if you find the Flux dev FP8 model too slow. It is a distilled model with FP8 precision that can produce high-quality images in 4 steps. The tradeoff is somewhat lower quality.
You need 16 GB of VRAM to run this workflow.
Step 1: Download the Flux AI Fast model
Download the Flux1 Schnell model.
Put the model file in the folder ComfyUI > models > unet.
Step 2: Download the CLIP models
Download the following two CLIP models and put them in ComfyUI > models > clip.
Step 3: Download the VAE
Download the Flux VAE model file. Put it in ComfyUI > models > vae.
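Steps 1 to 3 can also be scripted with the huggingface_hub package. This is a minimal sketch; the repo IDs and filenames (Kijai/flux-fp8, comfyanonymous/flux_text_encoders, black-forest-labs/FLUX.1-schnell, and the .safetensors names) are assumptions, so substitute the links above if yours differ.

```python
# Minimal sketch: fetch the Schnell model, the two text encoders, and the VAE.
# Repo IDs and filenames are assumptions; use the download links above if they differ.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Step 1: Flux Schnell model -> ComfyUI/models/unet
hf_hub_download(
    repo_id="Kijai/flux-fp8",                      # assumed repo for the FP8 Schnell weights
    filename="flux1-schnell-fp8.safetensors",
    local_dir="ComfyUI/models/unet",
)

# Step 2: the two CLIP/T5 text encoders -> ComfyUI/models/clip
for name in ("clip_l.safetensors", "t5xxl_fp8_e4m3fn.safetensors"):
    hf_hub_download(
        repo_id="comfyanonymous/flux_text_encoders",  # assumed repo
        filename=name,
        local_dir="ComfyUI/models/clip",
    )

# Step 3: the Flux VAE -> ComfyUI/models/vae
hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-schnell",    # dev and schnell use the same VAE file
    filename="ae.safetensors",
    local_dir="ComfyUI/models/vae",
)
```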
Step 4: Update ComfyUI
ComfyUI has had native support for Flux since August 2024. Update ComfyUI if you haven't already.
The easiest way to update ComfyUI is through the ComfyUI Manager. Click Manager > Update All.
Make sure to reload the ComfyUI page after the update. Clicking the restart button is not enough.
Step 5: Load Flux Schnell workflow
Download the Flux1 Schnell workflow JSON file below.
Drag and drop it onto the ComfyUI page in your browser. Update ComfyUI and reload the page if you see red boxes.
Press Queue Prompt to generate an image.
Photo of a cute woman, in kitchen cooking turkey, looking at viewer, left hand fixing her beautiful hair, holding a kitchen knife in the right hand.
The distilled model is not as good as the Flux1 dev FP8 model. The images are less coherent, and the quality is lower. Only use it if you need fast generation and can tolerate the lower quality.
Flux regular full model
Use this workflow if you have a GPU with 24 GB of VRAM and are willing to wait longer for the highest-quality image.
Step 1: Download the Flux Regular model
Go to the Flux dev model page and agree to the terms.
Download the Flux1 dev regular full model.
Put the model file in the folder ComfyUI > models > unet.
Step 2: Download the CLIP models
Download the following two CLIP models, and put them in ComfyUI > models > clip.
Step 3: Download the VAE
Download the Flux VAE model file. Put it in ComfyUI > models > vae.
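As with the Schnell section, the downloads can be scripted. This is a minimal sketch; the repo IDs and filenames are assumptions, and the full dev weights are gated, so you must accept the terms on the model page and log in with a Hugging Face access token first.

```python
# Minimal sketch: fetch the full FP16 Flux dev weights, text encoders, and VAE.
# The dev repo is gated: accept the terms on its page first, then log in with a token.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download, login

login()  # paste your Hugging Face access token when prompted

hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-dev",        # assumed repo; gated by the license terms
    filename="flux1-dev.safetensors",
    local_dir="ComfyUI/models/unet",
)

for name in ("clip_l.safetensors", "t5xxl_fp16.safetensors"):
    hf_hub_download(
        repo_id="comfyanonymous/flux_text_encoders",  # assumed repo
        filename=name,
        local_dir="ComfyUI/models/clip",
    )

hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-schnell",    # same VAE file as the dev model
    filename="ae.safetensors",
    local_dir="ComfyUI/models/vae",
)
```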
Step 4: Update ComfyUI
ComfyUI has had native support for Flux since August 2024. Update ComfyUI if you haven't already.
The easiest way to update ComfyUI is through the ComfyUI Manager. Click Manager > Update All.
Make sure to reload the ComfyUI page after the update. Clicking the restart button is not enough.
Step 5: Load Flux dev full workflow
Download the Flux1 dev regular full (FP16) workflow JSON file below.
Drag and drop it onto the ComfyUI page in your browser. Update ComfyUI and reload the page if you see red boxes.
Press Queue Prompt to generate an image.
Hi Andrew, why is there no negative prompt node in the Flux workflow? Is it not necessary? If I have to use a negative prompt, where do I write it?
Hi, negative prompts are not supported in the Flux dev/schnell models because they are the sped-up, distilled versions.
Hi there, I've followed all the steps but am only managing to get the FP8 version to work. With the other ones I'm getting the errors below (running an RTX 4070 16GB / 64GB RAM).
Prompt outputs failed validation
UNETLoader:
- Value not in list: unet_name: 'flux1-schnell-fp8.safetensors' not in []
and
Prompt outputs failed validation
UNETLoader:
- Value not in list: unet_name: 'flux1-dev.safetensors' not in []
Make sure these models are downloaded to the folder ComfyUI > models > unet, then click Refresh.
Sorry about my lack of attention. Thank you!
Hello this works thanks! How would I add a custom LORA to this setup?
Thanks
It was still pretty cumbersome to use LoRAs in ComfyUI when I wrote this. I will have a follow-up tutorial soon.
“Drop it to your ComfyUI.” What folder??
——————-
Step 5: Load Flux Schnell workflow
Download the Flux1 Schnell workflow JSON file below.
(flux1-schnell-fp8.json)
Drop it to your ComfyUI. Update ComfyUI and reload the page if you see red boxes.
Drop it onto the ComfyUI page in your browser.
Hi Andrew, I am on a Mac M3 with 16GB. When queueing the prompt I get "KSampler BFloat16 is not supported on MPS" after a while. When showing the report I get a verbose error message. Besides a very long dictionary, this is part of it:
File "/Users/beratung3/Desktop/ComfyUI/comfy/samplers.py", line 279, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "/Users/beratung3/Desktop/ComfyUI/comfy/samplers.py", line 228, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "/Users/beratung3/Desktop/ComfyUI/comfy/model_base.py", line 131, in apply_model
xc = xc.to(dtype)
As always many thanks for your help!
The error message said the model's data type is not supported on the Mac. I don't have a powerful enough Mac to even try Flux, so I have been running on Windows. 🤷
Under “Flux regular full model” > “Step 3: Download the VAE”
The link for the text “Flux VAE model” points to Flux.1-schnell. Is that correct?
Good catch. The two models use the same VAE file. I have corrected the link to avoid confusion.
Hello. Thank you for this guide. I am getting the error:
“ERROR: Could not detect model type of: D:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\models\checkpoints\flux1-dev-fp8.safetensors”
I did everything you said. The only problem is there is no “manager” as you say in Step 2: Update ComfyUI. I only see a FOLDER called “update” and, after doing that, I have this error.
What am I missing?
I am using the single file checkpoint, the first you mentioned. I put it in the right folder but somehow it does not work.
Please help
What you did sounds right. You can try installing the ComfyUI Manager: https://stable-diffusion-art.com/comfyui-manager-install/
and repeat from step 2 onward.
If that still fails, you can try installing a new ComfyUI portable folder…
Hi Andrew, is it working on mac?
I see DiffusionBee supports Flux. Not sure about Forge and ComfyUI.
Hi Andrew, Thank-you for the instructions. Will the installation instructions be significantly different for the AUTOMATIC1111 user interface?
Not supported by A1111 yet. Here’s the ticket. https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16311
But you can use it with Forge. A tutorial on this will be coming out soon.
Thank-you Andrew for your response. I was wondering if A1111 was not supported, but was not sure. I will keep my eye out. Thank-you for the references. Good luck with the tutorial.
You can run all of them if you have a 12GB card and 32GB of RAM with no modifications to the workflows; it'll just be a bit slow. People are making it run with even less VRAM.
I believe there’s a small copy/paste error under #5 in the Schnell setup. It repeats “dev” from the previous workflow.
you are right, thanks!
Awesome, I really like your courses. I read every word of them carefully, afraid of missing something. But the membership costs 100 US dollars per year, which is a bit expensive for me. Will there be a discount period in the future?
I am glad you like them! They take time to write and the yearly rate is already discounted, so unfortunately I can't set it any lower.
Yes please, More flux contents!
I am lining up a few!
Great quality! Is an image-to-image workflow also possible with Flux?
Yes, using Forge is the easiest way right now. (Installation tutorial coming soon)