This ComfyUI workflow copies the input image and generates a new one with the Flux.1 Dev model.


You can also add keywords to the prompt to modify the image.

You must be a member of this site to download the following ComfyUI workflow.
Software
We will use ComfyUI, an alternative to AUTOMATIC1111.
Read the ComfyUI installation guide and ComfyUI beginner’s guide if you are new to ComfyUI.
Take the ComfyUI course to learn ComfyUI step-by-step.
How does this workflow work?
This workflow uses the BLIP image-captioning model to extract a prompt from the input image.

You can optionally append text to the prompt to modify the output.
The prompt is then used by the Flux.1 Dev model to produce an image.
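Conceptually, the node graph behaves like the Python sketch below. Here `blip_caption` and `flux_generate` are hypothetical stand-ins for the BLIP caption node and the Flux.1 Dev sampling nodes; only the prompt-assembly step is meant literally.

```python
# Conceptual sketch of the workflow's data flow.
# blip_caption() and flux_generate() are hypothetical stand-ins for the
# BLIP caption node and the Flux.1 Dev nodes in ComfyUI.

def blip_caption(image_path: str) -> str:
    """Stand-in: the BLIP node returns a text description of the image."""
    return "a red sports car parked on a city street"

def build_prompt(caption: str, append: str = "") -> str:
    """Combine the extracted caption with optional appended keywords."""
    return f"{caption}, {append}" if append else caption

def flux_generate(prompt: str) -> str:
    """Stand-in: the Flux.1 Dev model renders an image from the prompt."""
    return f"<image generated from: {prompt}>"

caption = blip_caption("input.png")
prompt = build_prompt(caption, append="night")
print(prompt)  # → a red sports car parked on a city street, night
result = flux_generate(prompt)
```

This is why the output resembles the input: the generated image is driven entirely by the caption BLIP extracts, plus whatever you append.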

Step-by-step guide
Step 1: Load workflow
Download the ComfyUI JSON workflow below.
Drag and drop the JSON file to ComfyUI.
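If you prefer scripting to drag-and-drop, ComfyUI also exposes a local HTTP API for queueing workflows. A minimal sketch, assuming a server on the default port 8188 and a workflow exported with Save (API Format) — the regular workflow JSON will not queue:

```python
# Sketch: queue a workflow through ComfyUI's local HTTP API.
# Assumes the server runs on the default port 8188 and the workflow
# was exported with "Save (API Format)".
import json
import urllib.request

def build_payload(workflow: dict, client_id: str = "example") -> dict:
    """Wrap an API-format workflow dict in the payload /prompt expects."""
    return {"prompt": workflow, "client_id": client_id}

def queue_workflow(path: str, host: str = "127.0.0.1:8188") -> None:
    """Load a workflow JSON file and POST it to the /prompt endpoint."""
    with open(path) as f:
        workflow = json.load(f)
    data = json.dumps(build_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())

# Usage (hypothetical filename):
# queue_workflow("workflow_api.json")
```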
Step 2: Install missing nodes
If you see nodes with red borders, you don’t have the custom nodes required for this workflow. You should have ComfyUI Manager installed before performing this step.
Click Manager > Install Missing Custom Nodes.
Install the nodes that are missing.
Restart ComfyUI.
Refresh the ComfyUI page.
Step 3: Download models
Download the Flux.1 Dev model file flux1-dev-fp8.safetensors. Put it in ComfyUI > models > checkpoints.
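If you manage model files from a script, the expected location can be built like this. The ComfyUI root shown is an assumed example; adjust it to your install:

```python
# Build the expected checkpoint path for the downloaded model file.
# "ComfyUI" below is an assumed example install root.
from pathlib import Path

def checkpoint_path(comfy_root: str,
                    filename: str = "flux1-dev-fp8.safetensors") -> Path:
    """Return <root>/models/checkpoints/<filename>."""
    return Path(comfy_root) / "models" / "checkpoints" / filename

path = checkpoint_path("ComfyUI")
print(path.as_posix())  # → ComfyUI/models/checkpoints/flux1-dev-fp8.safetensors
```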
Google Colab
If you use my ComfyUI Colab notebook, you don’t need to download the model. Select the Flux1_dev model.
Step 4: Upload an input image
Upload the image you want to copy to the Load Image node.

Step 5: Revise the prompt
Review the prompt that BLIP extracted from the input image and revise it if it does not describe the image well.
Step 6: Run the workflow
Click the Queue button to run the workflow.

You should get an image similar to the input.

Optionally, change the seed value in the KSampler node to generate a different variation.

Step 7: Append prompt (optional)
Optionally, add keywords to the Append Prompt box to modify the image.

For example, adding “night” to the prompt casts the scene to a nighttime view.

I’m interested in learning to use Stable Diffusion to produce images. I have both Mac (Sequoia) and Windows (11) laptops. Which would you recommend that I use first? And, in addition to the book, what will I need to purchase to get started? Is the necessary software available for free use, or will I have an annual subscription to commit to? Last question (for now!): will I be limited in the kinds of images I will be able to produce, or will my options extend to anything legal that I want to portray? Thanks for sharing your experience and your knowledge!
DJB
A Windows machine is better, but you need an Nvidia GPU card. I offer online courses to learn SD from the ground up: https://stable-diffusion-art.com/stable-diffusion-courses/
The license of use depends on the model but generally you can use the image or video output commercially.
The flux1.Dev model is a non-commercial use model, correct? Why does everyone use that for development (models/Loras) and tutorials instead of the schnell model? I don’t get it.
You cannot host the model commercially, but you can use the resulting images commercially. See https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md section 2.4.
Thanks, Andrew. I have read the license several times and was always left feeling like there was wiggle room for misunderstanding. I am going to take your word on it and stop questioning it. Appreciate the input!