
# I made a 1-click app to run FLUX.2-klein on M-series Macs (8GB+ unified memory) : r/StableDiffusion

Been working on making fast image generation accessible on Apple Silicon. Just open-sourced it.

**What it does:**

- Text-to-image generation

- Image-to-image editing (upload a photo, describe changes)

- Runs locally on your Mac - no cloud, no API keys

**Models included:**

- FLUX.2-klein-4B (Int8 quantized) - 8GB, great quality, supports img2img

- Z-Image Turbo (Quantized) - 3.5GB, fastest option

- Z-Image Turbo (Full) - LoRA support

**How fast?**

- ~8 seconds for 512x512 on Apple Silicon

- 4 steps by default (it's a distilled model)

**Requirements:**

- M1/M2/M3/M4 Mac with 16GB+ RAM (8GB works, but it's tight)

- macOS

**To run:**

1. Clone the repo
2. Double-click Launch.command
3. First run auto-installs everything
4. Browser opens with the UI

That's it. No conda, no manual pip installs, no fighting with dependencies.

GitHub: [https://github.com/newideas99/ultra-fast-image-gen](https://github.com/newideas99/ultra-fast-image-gen)

The FLUX.2-klein model is int8 quantized (I uploaded it to HuggingFace), which cuts memory from ~22GB to ~8GB while keeping quality nearly identical.
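The savings are easy to sanity-check with back-of-envelope arithmetic: int8 stores one byte per weight versus two for bf16/fp16. This sketch covers transformer weights only; the ~22GB and ~8GB totals also include the text encoder, VAE, and runtime overhead, and the parameter count here is illustrative:

```python
def weight_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a model with n_params parameters."""
    return n_params * bytes_per_param / 2**30

# Quantizing from bf16 (2 bytes/weight) to int8 (1 byte/weight)
# roughly halves the transformer's weight footprint:
bf16_gib = weight_gib(4e9, 2)  # ~7.45 GiB for a 4B-param model in bf16
int8_gib = weight_gib(4e9, 1)  # ~3.73 GiB after int8 quantization
```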

Would love feedback.

---
Source: [I made a 1-click app to run FLUX.2-klein on M-series Macs (8GB+ unified memory) : r/StableDiffusion](https://www.reddit.com/r/StableDiffusion/comments/1qdzj2t/i_made_a_1click_app_to_run_flux2klein_on_mseries/)