How to install Flux AI model on ComfyUI


Flux is a family of text-to-image diffusion models developed by Black Forest Labs. As of August 2024, it is the best open-source image model you can run locally on your PC, surpassing the quality of SDXL and Stable Diffusion 3 Medium.

The Flux.1 dev AI model has very good prompt adherence, generates high-quality images with correct anatomy, and is pretty good at generating text.

In this tutorial, you will learn how to install a few variants of the Flux model locally in ComfyUI.

  • The single-file version for easy setup.
  • The fast version for speedy generation.
  • The regular version for the highest quality.


Software

We will use ComfyUI, an alternative to AUTOMATIC1111. You can use it on Windows, Mac, or Google Colab. If you prefer using a ComfyUI service, Think Diffusion offers our readers an extra 20% credit.

Read the ComfyUI beginner’s guide if you are new to ComfyUI. See the Quick Start Guide if you are new to AI images and videos.

Take the ComfyUI course to learn how to use ComfyUI step-by-step.

Flux AI Model variants

The following variants are available in ComfyUI:

  1. Single-file FP8 version: This reduced-precision model is self-contained in a single checkpoint file. It is easy to use and requires less VRAM. (16 GB)
  2. Flux Schnell version: A distilled 4-step model that trades some quality for faster sampling. (16 GB)
  3. Regular FP16 full version: The full-precision version with slightly higher quality. It requires more VRAM. (24 GB)

Start with the single-file FP8 version if this is your first time using Flux.
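
All three variants use the same folder layout inside your ComfyUI installation. For reference, here is where the files from the steps below go (the exact file names depend on the links you download):

    ComfyUI/
    └── models/
        ├── checkpoints/   (single-file FP8 checkpoint)
        ├── unet/          (Schnell and regular FP16 model files)
        ├── clip/          (the two CLIP text encoder files)
        └── vae/           (the Flux VAE file)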

Single-file Flux FP8 model on ComfyUI

This is the easiest way to use Flux with a single checkpoint model file. ComfyUI has native support for Flux.

You need 16 GB of VRAM to run this workflow.
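
Not sure how much VRAM your GPU has? ComfyUI runs on PyTorch, so you can check with a few lines of Python from the environment ComfyUI uses (a minimal sketch for NVIDIA GPUs):

    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        # total_memory is in bytes; convert to gigabytes
        print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
    else:
        print("No CUDA GPU detected")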

Step 1: Download the Flux AI model

Download the Flux1 dev FP8 checkpoint.

Put the model file in the folder ComfyUI > models > checkpoints.
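
If you prefer to script the download instead of saving the file in your browser, here is a minimal sketch using the huggingface_hub Python package. The repo id is an assumption about where the FP8 checkpoint is mirrored; substitute the exact repository and file name from the download link above if they differ.

    # pip install huggingface_hub
    from huggingface_hub import hf_hub_download

    # Assumed repo for the single-file Flux dev FP8 checkpoint.
    # Replace repo_id/filename with the ones from the download link if different.
    hf_hub_download(
        repo_id="Comfy-Org/flux1-dev",           # assumption
        filename="flux1-dev-fp8.safetensors",
        local_dir="ComfyUI/models/checkpoints",  # run from the folder containing ComfyUI
    )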

Step 2: Update ComfyUI

ComfyUI has native support for Flux starting August 2024. Update your ComfyUI if you haven’t done so since then.

The easiest way to update ComfyUI is through the ComfyUI Manager. Click Manager > Update All.

Make sure to reload the ComfyUI page after the update; clicking the restart button is not enough.

Step 3: Load Flux dev workflow

Download the Flux1 dev FP8 workflow JSON file below.

Drag and drop it onto the ComfyUI canvas in your browser. Update ComfyUI and reload the page if you see red boxes.

Press Queue Prompt to generate an image.

Photo of a cute woman, in kitchen cooking turkey, looking at viewer, left hand fixing her beautiful hair, holding a kitchen knife in the right hand.

Flux Fast model (Schnell)

Flux Schnell is for you if the Flux dev FP8 model feels too slow. It is a distilled model with FP8 precision that can produce high-quality images in 4 steps. The tradeoff is slightly lower quality.

You need 16 GB of VRAM to run this workflow.

Step 1: Download the Flux AI Fast model

Download the Flux1 Schnell model.

Put the model file in the folder ComfyUI > models > unet.

Step 2: Download the CLIP models

Download the following two CLIP models and put them in ComfyUI > models > clip.

Step 3: Download the VAE

Download the Flux VAE model file. Put it in ComfyUI > models > vae.
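
If you would rather script steps 1 to 3, here is a minimal sketch using the huggingface_hub package. All repo ids and the text encoder and VAE file names are assumptions based on commonly used mirrors of these files; the download links above are authoritative, so adjust the values if they point elsewhere.

    # pip install huggingface_hub
    from huggingface_hub import hf_hub_download

    # Step 1: Flux Schnell model (assumed repo) -> models/unet
    hf_hub_download(repo_id="Comfy-Org/flux1-schnell",            # assumption
                    filename="flux1-schnell-fp8.safetensors",
                    local_dir="ComfyUI/models/unet")

    # Step 2: the two text encoders (CLIP-L and T5-XXL, assumed names) -> models/clip
    hf_hub_download(repo_id="comfyanonymous/flux_text_encoders",  # assumption
                    filename="clip_l.safetensors",
                    local_dir="ComfyUI/models/clip")
    hf_hub_download(repo_id="comfyanonymous/flux_text_encoders",  # assumption
                    filename="t5xxl_fp8_e4m3fn.safetensors",
                    local_dir="ComfyUI/models/clip")

    # Step 3: the Flux VAE (assumed file name) -> models/vae
    hf_hub_download(repo_id="black-forest-labs/FLUX.1-schnell",   # assumption
                    filename="ae.safetensors",
                    local_dir="ComfyUI/models/vae")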

Step 4: Update ComfyUI

ComfyUI has native support for Flux starting August 2024. Update ComfyUI if you haven’t already.

The easiest way to update ComfyUI is through the ComfyUI Manager. Click Manager > Update All.

Make sure to reload the ComfyUI page after the update; clicking the restart button is not enough.

Step 5: Load Flux Schnell workflow

Download the Flux1 Schnell workflow JSON file below.

Drag and drop it onto the ComfyUI canvas in your browser. Update ComfyUI and reload the page if you see red boxes.

Press Queue Prompt to generate an image.

Photo of a cute woman, in kitchen cooking turkey, looking at viewer, left hand fixing her beautiful hair, holding a kitchen knife in the right hand.

The distilled model is not as good as the Flux1 dev FP8 model. The images are less coherent, and the quality is lower. So only use it if you need fast generation and can tolerate lower quality.

Flux regular full model

Use this workflow if you have a GPU with 24 GB of VRAM and are willing to wait longer for the highest-quality image.

Step 1: Download the Flux Regular model

Go to the Flux dev model page and agree to the terms.

Download the Flux1 dev regular full model.

Put the model file in the folder ComfyUI > models > unet.
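
The full dev model is gated behind the license terms, so a scripted download also needs a Hugging Face access token created after you accept the terms on the model page. A minimal sketch, assuming the official black-forest-labs/FLUX.1-dev repository and the flux1-dev.safetensors file name:

    # pip install huggingface_hub
    from huggingface_hub import hf_hub_download

    hf_hub_download(
        repo_id="black-forest-labs/FLUX.1-dev",  # assumed gated repo; accept the terms first
        filename="flux1-dev.safetensors",        # full FP16 model file
        local_dir="ComfyUI/models/unet",
        token="hf_xxxxxxxx",                     # placeholder: your Hugging Face access token
    )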

Step 2: Download the CLIP models

Download the following two CLIP models, and put them in ComfyUI > models > clip.

Step 3: Download the VAE

Download the Flux VAE model file. Put it in ComfyUI > models > vae.

Step 4: Update ComfyUI

ComfyUI has native support for Flux starting August 2024. Update ComfyUI if you haven’t already.

The easiest way to update ComfyUI is through the ComfyUI Manager. Click Manager > Update All.

Make sure to reload the ComfyUI page after the update; clicking the restart button is not enough.

Step 5: Load Flux dev full workflow

Download the Flux1 dev regular full (FP16) workflow JSON file below.

Drag and drop it onto the ComfyUI canvas in your browser. Update ComfyUI and reload the page if you see red boxes.

Press Queue Prompt to generate an image.


By Andrew

Andrew is an experienced engineer with a specialization in Machine Learning and Artificial Intelligence. He is passionate about programming, art, photography, and education. He has a Ph.D. in engineering.

29 comments

  1. Hi Andrew, why is there no negative prompt node in the Flux workflow? Is it not necessary? If I have to use a negative prompt, where do I write it?

  2. Hi there, I’ve followed all the steps but am only managing to get the fp8 version to work; with the other ones I’m getting the errors below (running an RTX 4070 16 GB / 64 GB RAM)

    Prompt outputs failed validation
    UNETLoader:
    – Value not in list: unet_name: ‘flux1-schnell-fp8.safetensors’ not in []

    and

    Prompt outputs failed validation
    UNETLoader:
    – Value not in list: unet_name: ‘flux1-dev.safetensors’ not in []

  3. “Drop it to your ComfyUI.” What folder??
    ——————-
    Step 5: Load Flux Schnell workflow
    Download the Flux1 Schnell workflow JSON file below.
    (flux1-schnell-fp8.json)
    Drop it to your ComfyUI. Update ComfyUI and reload the page if you see red boxes.

  4. Hi Andrew, I am on a Mac M3 with 16GB. When queueing the prompt I get “KSampler BFloat16 is not supported on MPS” after a while. When showing the report I get a verbose error message, besides a very long dictionary. This is part of it:

    File "/Users/beratung3/Desktop/ComfyUI/comfy/samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
    File "/Users/beratung3/Desktop/ComfyUI/comfy/samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
    File "/Users/beratung3/Desktop/ComfyUI/comfy/model_base.py", line 131, in apply_model
    xc = xc.to(dtype)

    As always many thanks for your help!

    1. The error message said the model’s data type is not supported on Mac. I don’t have a powerful enough Mac to even try Flux, so I have been running it on Windows. 🤷

  5. Under “Flux regular full model” > “Step 3: Download the VAE”

    The link for the text “Flux VAE model” points to Flux.1-schnell. Is that correct?

  6. Hello. Thank you for this guide. I am getting the error:

    “ERROR: Could not detect model type of: D:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\models\checkpoints\flux1-dev-fp8.safetensors”

    I did everything you said. The only problem is there is no “manager” as you say in Step 2: Update ComfyUI. I only see a FOLDER called “update” and, after doing that, I have this error.

    What am I missing?

    I am using the single file checkpoint, the first you mentioned. I put it in the right folder but somehow it does not work.

    Please help

  7. Hi Andrew, thank you for the instructions. Will the installation instructions be significantly different for the AUTOMATIC1111 user interface?

      1. Thank you, Andrew, for your response. I was wondering if A1111 was not supported, but was not sure. I will keep my eye out. Thank you for the references. Good luck with the tutorial.

  8. You can run all of them if you have a 12 GB card and 32 GB of RAM with no modifications to the workflows; it’ll just be a bit slow. People are making it run in even less VRAM.

  9. I believe there’s a small copy/paste error under #5 in the Schnell setup. It repeats “dev” from the previous workflow.

  10. Awesome. I really like your courses and watch every word carefully, afraid of missing anything. But a membership costs 100 US dollars per year, which is a bit expensive for me. Will there be a discount in the future?
