How to Run Larger Diffusion Models With Low VRAM in ComfyUI: A Guide for Running the New Flux Model Using Less Than 12GB VRAM!

Learn how to run the new Flux model on a GPU with just 12GB VRAM using ComfyUI! This guide covers installation, setup, and optimizations, allowing you to handle large AI models with limited hardware resources. Perfect for AI enthusiasts with mid-range GPUs.


Running large models in ComfyUI can be challenging, especially when dealing with limited GPU memory (VRAM). However, by adding your system RAM, you can run models that exceed your GPU's VRAM capacity. Learn how in this article!


First, What is FLUX?

Black Forest Labs, the team that helped develop the original Stable Diffusion, recently unveiled Flux, the largest open-source text-to-image model to date. With an impressive 12 billion parameters, Flux produces visuals that rival Midjourney's and arguably surpass any other model available, open-source or proprietary.

In this guide, I'll show you how to run the new Flux model on a GPU with just 12GB of VRAM (or less) in ComfyUI! We'll cover the necessary setup, including a small bonus for Linux users.


Let's begin.

Installation and Setup

1. Download the Required Files

2. Configure ComfyUI for Low VRAM Usage

To keep the setup within the limits of a 12GB VRAM GPU, add the --lowvram argument when launching ComfyUI: python main.py --lowvram. This flag tells ComfyUI to offload model weights to system RAM when VRAM runs low.
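If you want the launch command to adapt to the machine it runs on, you can pick the flag based on the VRAM that nvidia-smi reports. This is a minimal sketch, not part of ComfyUI itself: the 12288 MiB (12GB) threshold is an arbitrary example, and the script falls back to --lowvram whenever nvidia-smi is unavailable.

```shell
#!/bin/bash
# Sketch: choose a ComfyUI memory flag from the GPU's reported VRAM (MiB).
# The 12288 MiB threshold is an assumption for this example; --lowvram is
# ComfyUI's built-in flag for offloading weights to system RAM.
vram_mib=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits 2>/dev/null | head -n1)
if [ -z "$vram_mib" ] || [ "$vram_mib" -le 12288 ]; then
    flag="--lowvram"
else
    flag=""   # plenty of VRAM: let ComfyUI manage memory normally
fi
echo "python main.py $flag"
```

On a 12GB card (or a machine where nvidia-smi is missing), this prints the same python main.py --lowvram command used throughout this guide.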

3. Custom Script for Linux Users

As a Linux user, I streamline the process by adding a custom ./start.sh script to the ComfyUI folder. This script manages the virtual environment and always starts ComfyUI with the necessary parameters:

#!/bin/bash
# Custom script: activate the virtual environment and start ComfyUI

# Create the virtual environment if it doesn't exist already
if [ ! -d ".venv" ]; then
    python3 -m venv .venv
    source .venv/bin/activate
    pip install -r requirements.txt
fi

# Activate the virtual environment and start ComfyUI
source .venv/bin/activate
python main.py --lowvram

The script creates a virtual environment (.venv) that isolates ComfyUI's dependencies, ensuring a smoother experience and no conflicts with your global Python packages. You can adjust the script to fit your setup.
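A quick way to confirm the isolation is working, assuming python3 and its venv module are installed: after activation, `which python` should resolve inside .venv rather than to your system interpreter.

```shell
# Create and activate the venv, then confirm `python` resolves inside it
python3 -m venv .venv
source .venv/bin/activate
python_path=$(which python)
case "$python_path" in
  *".venv"*) echo "isolated: $python_path" ;;
  *)         echo "warning: still using system python: $python_path" ;;
esac
```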

* Make the script executable before using it: chmod +x ./start.sh


4. Update and Run ComfyUI

Ensure your ComfyUI installation is up to date, then start the web UI by running ./start.sh (or python main.py --lowvram if you don't want to use an isolated virtual environment).

Download this workflow and load it in ComfyUI by either directly dragging it into the ComfyUI tab or clicking the "Load" button from the interface. After loading, adjust the nodes to use the models you've just downloaded.

This image can also be used as a ComfyUI workflow.

Finally, run the workflow by clicking the "Queue Prompt" button.

* Be patient; generation takes around 1 minute with 4 steps, but it's worth the wait!


Hardware and Performance

My Setup:

  • CPU: Intel Core i5-13600KF
  • GPU: NVIDIA GeForce RTX 3060 with 12GB VRAM
  • Memory: 32GB DDR5 5200MHz RAM + Linux Swap File

Important Considerations

  • Memory Usage: The setup can consume nearly all available 32GB of RAM. Systems with less RAM may experience slowdowns or instability, especially during peak memory usage.
  • Text Encoding: The time for text encoding can vary. Notably, after a period of inactivity, the process can take up to 200 seconds. A more powerful CPU can help reduce this time.
  • Model Choice: For beginners, the Flux.1 Schnell model is a good starting point due to its simpler setup and fewer required steps.
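My setup lists a Linux swap file alongside 32GB of RAM: swap gives the system somewhere to spill when peak memory usage approaches the physical limit. A minimal sketch for adding one (the 16G size is an arbitrary example; these commands require root):

```shell
# Create a 16GB swap file to absorb peak memory usage (size is an example)
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Verify the swap file is active
swapon --show
```

Expect heavy slowdowns whenever the system actually dips into swap; it is a safety net against out-of-memory crashes, not a substitute for RAM.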

Conclusion

Running larger models on a GPU with limited VRAM is achievable with the right setup and configuration. By compensating for low VRAM with system RAM, you can effectively run advanced models like Flux on mid-range GPUs. This guide covered the necessary steps and considerations to ensure smooth operation, letting you explore the creative possibilities within ComfyUI.

Resources:

Harduex blog

