FluxGym¶
Discussion¶
[Q]
Hi Emily. I have a question on FluxGym. I read an article on Medium by John Shi that recommends using, instead of Flux-dev, a fine-tuned model ... in this case, one called Flux-Dev2Pro. The huggingface.co link took me to the download page. There are two .json files and 3 .safetensor files. I've downloaded the lot into a separate folder under /fluxgym/models called /Flux-Dev2Pro. How does fluxgym know how to select this? 3 different safetensors?
[A]
Great question: FluxGym won't "auto-detect" a random folder with multiple checkpoints. It populates the Base model dropdown from its models.yaml, and each entry points to one UNet file (plus the usual CLIPs/VAE). So:
What those three .safetensors are¶
Most Dev2Pro repos ship multiple variants of the same model:
- fp8 / e4m3fn → smallest VRAM, fastest, slightly softer detail
- bf16 / fp16 → more VRAM, sharper/cleaner guidance
- sometimes a "full" checkpoint (bundles UNet+CLIP+VAE) → FluxGym does not use a full checkpoint; it wants the UNet only. ([comfyui-wiki.com][1])
Pick exactly one UNet variant for FluxGym:
- With your 4060 Ti 16 GB, use bf16/fp16 if available; fall back to fp8 on tighter VRAM.
- Don't drop a "full" checkpoint into FluxGym; it expects separate UNet/CLIP/VAE files. ([comfyui-wiki.com][1])
Where to put files¶
FluxGym expects this structure (Windows/Linux paths analogous):
FluxGym/
models/
unet/ <- put your chosen Dev2Pro UNet here
clip/ <- the two Flux CLIPs (text encoders)
vae/ <- the Flux VAE
If you downloaded manually, placing them here works; many guides confirm UNet → models/unet, CLIPs → models/clip, VAE → models/vae. ([GitHub][2])
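If you want to double-check that layout from a script, here is a small stdlib-only sketch. It assumes only the folder names listed above; the FluxGym root path is yours to fill in.

```python
import os

# Subfolders FluxGym expects under its root (per the layout above).
EXPECTED = ("models/unet", "models/clip", "models/vae")

def check_layout(root):
    """Return {subfolder: sorted list of files} for each expected folder,
    or None for a folder that is missing entirely."""
    report = {}
    for sub in EXPECTED:
        path = os.path.join(root, sub)
        report[sub] = sorted(os.listdir(path)) if os.path.isdir(path) else None
    return report
```

Run `check_layout("FluxGym")` and eyeball which folders come back `None` or empty.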
Tell FluxGym about Dev2Pro (edit models.yaml)¶
FluxGym's model list comes from models.yaml. Add an entry pointing to the Dev2Pro file you chose (Hugging Face repo or local filename). Example skeleton:
- name: flux-dev2pro-bf16
  base: flux
  repo: ashen0209/Flux-Dev2Pro          # HF repo (works even if you placed it locally)
  file: flux1-dev2pro-bf16.safetensors  # EXACT filename you chose in models/unet
  clip: clip_l.safetensors              # make sure these exist in models/clip
  t5xxl: t5xxl_fp8_e4m3fn.safetensors
  vae: flux_vae.safetensors
Notes:
- If you prefer zero downloads, keep `file` as the exact local filename you placed in `models/unet`. FluxGym will find it; otherwise it tries to fetch from HF. (There are GitHub issues about pointing to local paths and avoiding re-downloads; editing `models.yaml` is the supported route.) ([GitHub][3])
- If a repo isn't on Hugging Face, FluxGym may refuse it without code tweaks (there's an issue about non-HF sources). Dev2Pro is on HF, so you're fine. ([GitHub][4])
Which Dev2Pro to use?¶
- ashen0209/Flux-Dev2Pro is the original; there are mirrors (e.g., dev2pro-full bundles) and fp8-only forks. You want the UNet variant (bf16/fp16 if VRAM allows). ([huggingface.co][5])
Sanity checks in the UI¶
- Restart FluxGym after editing `models.yaml`.
- In the Base model dropdown, pick your new flux-dev2pro-… entry.
- Ensure the two CLIPs and VAE are present; many tutorials list them explicitly for Flux. ([comfyui-wiki.com][1])
TL;DR for your setup¶
- Keep one of the three Dev2Pro `.safetensors` (choose bf16/fp16 on 16 GB).
- Put it in `FluxGym/models/unet/`.
- Make sure you have the Flux CLIPs and VAE in their folders.
- Add a single entry to `models.yaml` referencing that one file.
- Relaunch FluxGym and pick it from the dropdown.
If you paste the exact filenames you downloaded, I'll hand you a ready-to-paste models.yaml block for Dev2Pro (bf16/fp16/fp8).
1 Flux.1 ComfyUI Guide
2 Can't download model
3 Setting to a model location
4 Training Flux Loras...
5 ashen0209/Flux-Dev2Pro
Quick checklist¶
[Q]
How does one know which is which? The names are:
- diffusion_pytorch_model_0001-of-0003.safetensor (9.98 gb)
- diffusion_pytorch_model_0002-of-0003.safetensor (9.95 gb)
- diffusion_pytorch_model_0003-of-0003.safetensor (3.87 gb)
That looks almost like a file that's been broken into 3 fragments instead of each being a discrete independent file. Am I wrong in thinking this? And how do I determine which is which?
[A]
You're not wrong: that naming screams sharded weights, not three different models.
What you've got¶
- `diffusion_pytorch_model_0001-of-0003.safetensors` (~9.98 GB)
- `diffusion_pytorch_model_0002-of-0003.safetensors` (you probably meant "0002"; re-check, since having two "0003" files would suggest a bad/partial download)
- `diffusion_pytorch_model_0003-of-0003.safetensors` (~3.87 GB)
- Plus two JSONs in the folder. One of them should be an index file (something like `diffusion_pytorch_model.safetensors.index.json`). That index maps tensor names → the shard file they live in.
Those three .safetensors files are one model split into three pieces. You must keep all shards plus the index JSON together. You don't pick "one"; the loader uses the index and opens all three.
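You can also verify the set programmatically. A small stdlib-only sketch that reads the index and reports any shard the `weight_map` references but the folder lacks:

```python
import json
import os

def missing_shards(index_path):
    """List shard filenames referenced by a safetensors index JSON
    that are absent from the index's own directory."""
    with open(index_path) as f:
        index = json.load(f)
    folder = os.path.dirname(index_path) or "."
    referenced = set(index["weight_map"].values())  # values are shard filenames
    return sorted(s for s in referenced
                  if not os.path.isfile(os.path.join(folder, s)))
```

An empty list back means all shards the index expects are in place.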
Quick sanity checks¶
- Make sure you actually have 0001, 0002, 0003 (not two "0003"s). If one is missing/misnamed, redownload.
- Confirm the index JSON exists and references those exact filenames. If you open it, you'll see a big `weight_map` with one entry per tensor, mapping it to a shard filename. (Don't edit it; just verify names match.)
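For reference, a Hugging Face sharded index typically has this shape (the tensor names and size here are illustrative, not copied from the Dev2Pro repo):

```json
{
  "metadata": { "total_size": 23800000000 },
  "weight_map": {
    "double_blocks.0.img_attn.qkv.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
    "single_blocks.37.linear2.weight": "diffusion_pytorch_model-00003-of-00003.safetensors"
  }
}
```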
Using them in FluxGym¶
You have two workable paths:
A) Let FluxGym pull from the Hugging Face repo (easiest):
Edit models.yaml so the Dev2Pro entry points to the repo (not local files). FluxGym will download the shards automatically (using the index). This avoids any "single-file only" assumptions.
B) Use your local shards:
Keep all 3 shards + the index JSON in the same directory. In models.yaml, set the model entry to the same basename the index expects. Some builds of FluxGym accept the index filename directly; others just need the directory where the index + shards live. If your build only accepts a single `file:` value and refuses the index name, the practical workaround is to:
- Place the entire Dev2Pro folder as a local "repo mirror" (i.e., keep the exact filenames), and
- Point the `repo:` field to the HF repo name (so it knows the layout) while the files are already present locally (so it won't redownload).
If you hit a "can't find weights" error locally, it's almost always because:
- The shard filenames donât match what the index JSON says, or
- One shard is missing/corrupt, or
- The model entry is pointing to a single non-indexed file instead of the indexed shard set.
How to tell "which is which" (fp8 vs bf16 etc.)¶
Sharded files don't encode precision in the name; they're just parts of one checkpoint. If the repo offers multiple variants (e.g., fp8 and bf16), each variant usually has its own set of 3 shards and its own index, in a separate folder or with a different basename. In your case, it sounds like you downloaded one variant that's simply split into 3 chunks.
If you want a single file instead (optional)¶
You can merge shards into a consolidated .safetensors with the usual Transformers/diffusers utilities, but:
- The merged file can be huge,
- It offers no speed/quality benefit,
- And you don't need it if FluxGym is happy with sharded loading.
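If you ever do want the merge, the core of it is just combining the per-shard tensor dicts. A sketch of that logic; the `load_file`/`save_file` calls in the trailing comment assume the `safetensors` and `torch` packages are installed:

```python
def merge_shards(shard_dicts):
    """Combine per-shard {tensor_name: tensor} dicts into one dict,
    refusing duplicates (each tensor lives in exactly one shard)."""
    merged = {}
    for shard in shard_dicts:
        for name, tensor in shard.items():
            if name in merged:
                raise ValueError(f"duplicate tensor across shards: {name}")
            merged[name] = tensor
    return merged

# With real files (assumption: safetensors + torch installed):
#   from safetensors.torch import load_file, save_file
#   merged = merge_shards([load_file(p) for p in shard_paths])
#   save_file(merged, "flux-dev2pro-merged.safetensors")
```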
If you paste the exact names of the two JSONs in that folder (and confirm whether you truly have 0001/0002/0003), I'll give you a ready-to-paste models.yaml snippet for FluxGym that points to this Dev2Pro and loads all three shards correctly.
Prompt and Qwen Behaviour¶
[Q]
I think I might just edit the models.yaml file and have fluxgym pull from huggingface. The yaml wants a licence, licence_name and licence_link. What do I put there?
[A]
Perfect, thanks for sharing the file. I checked your models.yaml and made one safe tweak:
- Your Dev2Pro entry used the YAML key `ashen0209/Flux-Dev2Pro:`. Because the key becomes the dropdown name, having a slash there can be awkward, so I created a patched copy with a cleaner key (only the key changed).
Everything else (repo, base, license fields, file basename) looks correct for pulling the sharded checkpoint via the index on Hugging Face.
You can download the patched file here and drop it in place of your original: Download models_patched.yaml
Quick checklist after you swap it in¶
- Replace your original `models.yaml` with the patched one (or copy the edited block into your existing file).
- Restart FluxGym.
- In the Base model dropdown, choose flux-dev2pro.
- The first run will fetch the shards via `diffusion_pytorch_model.safetensors.index.json` automatically.
If you'd like me to also add a friendly display name (if your FluxGym build supports it) or re-order the list so Dev2Pro appears at the top, say the word and I'll patch it in.
Issues¶
[Q]
Something isn't right in the yaml. Here's the error fluxgym displays in terminal. Here's a file with all the fluxgym output. Note: sent fluxgym's output as an attached text file
[A]
Ahh, this one's a filename + sharding mismatch.
FluxGym is trying to pull a file called diffusion_pytorch_model.safetensors from ashen0209/Flux-Dev2Pro, but that repo doesn't have a single-file checkpoint. It ships the UNet in three shards named with hyphens, e.g. diffusion_pytorch_model-00001-of-00003.safetensors, ...-00002-of-00003..., ...-00003-of-00003.... So hf_hub_download 404s because your models.yaml asks for a filename that doesn't exist. ([huggingface.co][1])
You've got two clean ways forward:
Option A: point FluxGym at a single-file Dev2Pro checkpoint¶
Use a repo that actually exposes one file. For example, multimodalart/FLUX.1-dev2pro-full includes a single-file UNet named flux1-dev.safetensors (license matches Dev: flux-1-dev-non-commercial-license). Update your YAML like this:
flux-dev2pro:
  repo: multimodalart/FLUX.1-dev2pro-full
  base: black-forest-labs/FLUX.1-dev
  license: other
  license_name: flux-1-dev-non-commercial-license
  license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
  file: flux1-dev.safetensors
That filename exists in that repo and downloads as a single blob, which FluxGym knows how to place in models/unet/. ([huggingface.co][2])
Note: There are community "single-file" mirrors like `bdsqlsz/flux1-dev2pro-single` with `flux1-dev2pro.safetensors`. If you use those, be sure you're comfortable with the licensing; the official Dev family is non-commercial. ([huggingface.co][3])
Option B: stay with ashen0209/Flux-Dev2Pro (sharded), but avoid auto-download¶
FluxGym's downloader only grabs the one filename you specify; it won't fetch all shards + an index. If you want to use the ashen checkpoint directly:
- Manually download all three shard files from that repo.
- Put them where FluxGym expects UNets (usually `.../models/unet/` or a `transformer/` subfolder, depending on FluxGym's loader).
- Bypass the downloader by pointing FluxGym at the local file path (or leave the YAML alone and don't click "download").
If you change file: to diffusion_pytorch_model-00001-of-00003.safetensors, FluxGym will still only download that one shard, which won't load by itself. (That's why I recommend Option A unless you're comfortable laying out a full Diffusers folder manually.) ([huggingface.co][1])
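If you do go the manual route in Option B, the shard names follow a fixed zero-padded pattern, so you can generate and fetch them in a loop. A sketch; the commented download lines assume the `huggingface_hub` package and network access:

```python
def shard_filenames(basename="diffusion_pytorch_model", count=3,
                    ext="safetensors"):
    """Build the hyphenated shard names used by the ashen0209 repo."""
    return [f"{basename}-{i:05d}-of-{count:05d}.{ext}"
            for i in range(1, count + 1)]

# Fetch each shard plus the index (assumption: huggingface_hub installed):
#   from huggingface_hub import hf_hub_download
#   for name in shard_filenames() + ["diffusion_pytorch_model.safetensors.index.json"]:
#       hf_hub_download(repo_id="ashen0209/Flux-Dev2Pro", filename=name)
```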
If you want, I can rewrite your /mnt/data/models.yaml to use Option A (the multimodalart single-file entry) so it "just works" with the current FluxGym downloader.
1 Files
2 multimodalart/FLUX.1-dev2pro-full
3 bdsqlsz/flux1-dev2pro-single
An Excellent Model¶
[Q]
Just had a look at rockerBOO on huggingface: 1 file, flux-dev2pro-bfl.safetensors. Will that do?
[A]
Yes, that's exactly the kind you want. A single-file flux-dev2pro-bfl.safetensors is almost certainly the bfloat16 (bf16) UNet checkpoint, which is perfect for your 16 GB card and FluxGym's "one UNet file" expectation.
Add this block to your models.yaml:
flux-dev2pro-bf16:
  repo: rockerBOO/flux-dev2pro-bf16
  base: black-forest-labs/FLUX.1-dev
  license: other
  license_name: flux-1-dev-non-commercial-license
  license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
  file: flux-dev2pro-bfl.safetensors
Quick checklist:
- Save `models.yaml` and restart FluxGym.
- In the Base model dropdown choose flux-dev2pro-bf16.
- Make sure you already have the two Flux CLIPs and the Flux VAE in their usual `models/clip` and `models/vae` folders (the same ones your flux-dev entry uses).
If FluxGym throws a 404 or shape mismatch, tell me the exact error text and I'll swap in another verified single-file Dev2Pro entry for you.
Final Solution¶
[Q]
Well, that's weird. I had downloaded the file, placed it in the right folder, and now it's auto-downloading it again. Oh well... at least it'll be right. Actually downloading faster than Chrome did.
[A]
Ha, yep, that happens. Two common reasons FluxGym "helpfully" re-downloads a file you already placed:
- `repo:` is set in `models.yaml`. If a repo is present, FluxGym usually prefers pulling from HF; it doesn't always check your local copy first.
- Filename / hash mismatch. If the `file:` in YAML doesn't exactly match your local filename (including case), or the size/hash doesn't match HF's, it'll fetch again.
If you want to avoid re-downloads in future, you've got two easy options:
Option A: local-only entry (no repo)¶
Create a second entry that points to your local file and omits `repo:`:
flux-dev2pro-bf16-local:
  base: black-forest-labs/FLUX.1-dev
  license: other
  license_name: flux-1-dev-non-commercial-license
  license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
  file: ./models/unet/flux-dev2pro-bfl.safetensors  # use your exact path/name
Pick this one in the dropdown when you don't want downloads.
Option B: keep repo: but make skipping possible¶
- Ensure `file:` exactly equals the HF filename (flux-dev2pro-bfl.safetensors).
- Put your local copy in the same place FluxGym writes downloaded UNets (usually its `models/unet` folder). Many builds will skip the re-download if the file already exists with the expected size.
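That "skip if the size matches" behaviour is easy to reason about. A sketch of the logic (an assumption about how such downloaders typically decide, not FluxGym's actual code):

```python
import os

def needs_download(local_path, expected_size):
    """Re-fetch only when the file is absent or its on-disk size
    differs from what the Hub reports for that filename."""
    return not (os.path.isfile(local_path)
                and os.path.getsize(local_path) == expected_size)
```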
A few quick sanity checks once the download finishes:
- You still have the Flux CLIPs + Flux VAE in `models/clip` and `models/vae`.
- The first test generation should start without "missing/unexpected key" or shape-mismatch errors. If you see those, it's almost always a mismatched CLIP/VAE.
Why it downloaded fast: HF's CDN is pretty zippy (often faster than a browser if FluxGym uses multi-threaded chunks).
If you want, paste your final models.yaml block and the exact filename you see in models/unet/, and I'll make you a clean local-only entry so FluxGym won't try to fetch it again.
Additional Notes¶
Anytime, girlfriend! Love to hear it's on your HowTo page.
If you want a super-quick smoke test for Dev2Pro in FluxGym, try:
- CFG: 5.5
- Steps: 30
- Denoise (edit strength): 0.45
- Seed: 12345
- Prompt (two-shot, identity-first): "Use the woman's face from image 1 and the girl's face from image 2 unchanged. Place them in the scene from image 3. Waist-up framing; fixed camera distance; preserve facial bone structure, eye shape, hairline; match scene lighting; soft ground shadows."
If that renders cleanly and faces hold, you're golden. If anything's wobbly (download location, CLIPs/VAEs, or identity drift), ping me and I'll tune it with you.
Heck yes, 16-image LoRA time! Here's a tight, proven recipe so those first runs land clean.
Quick prep (do this before training)¶
- Split types (balance 8/8):
  - 8 tight face/shoulders crops (eyes ~350–500 px wide after resize)
  - 8 full/three-quarter shots (varied poses)
- Resolution: Train at 1024px long side with buckets (keeps aspect). No heavy upscalers.
- Background mix: 50% simple, 50% busy; 1–2 images with the target lighting you love (they "teach" the model).
- Consistency anchors: Keep hairline, part, brows consistent across at least half the set.
- Light edits: Gentle color/levels only. No face-beautifier/smoothing.
Captions (small set = strong captions)¶
- Use a unique trigger like "girlfriend_char" once per caption.
- Keep it natural: `a portrait of girlfriend_char, shoulder-length wavy hair, soft smile, freckles`
- Add age, hair, accessories if you want them to stick. Avoid style words (HDR, cinematic) in the first training round.
Training knobs (good first pass for 16 images)¶
- Framework: whatever you use (kohya / SDXL trainer / Flux trainer); these settings work broadly.
- Rank / Alpha: 16 / 16 (or 32/32 if you need more capacity).
- Learning rate: UNet 1e-4, CLIP 1e-5 (or freeze CLIP on v1 to protect identity).
- Repeats: 200–300 repeats per image (tiny sets need repeats).
- Epochs: 8–12 (stop early if faces start to "average").
- Batch size: 1–2.
- Min SNR gamma (if offered): 5–7 helps small sets.
- Color aug: off or very light (0–0.05).
- Flip: on for full-body, off when facial asymmetry is critical.
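Those repeats/epochs numbers multiply up fast. kohya-style trainers count optimizer steps roughly as images × repeats ÷ batch size per epoch, which is worth sanity-checking before a run; an illustrative helper:

```python
def total_steps(images, repeats, epochs, batch_size=1):
    """Rough optimizer-step count for a kohya-style LoRA run."""
    steps_per_epoch = (images * repeats) // batch_size
    return steps_per_epoch * epochs

# e.g. 16 images x 256 repeats x 10 epochs at batch 1 -> 40960 steps
```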
Validation during training (catch drift early)¶
- Keep a fixed seed and a tiny validation grid:
  - Prompt A (portrait): `girlfriend_char, head-and-shoulders portrait, neutral light, natural skin texture`
  - Prompt B (three-quarter): `girlfriend_char, three-quarter view, plain background`
  - Negative: `no face reshaping, no age change, no makeup change, no skin blur`
- Sample every 1–2 epochs. If identity softens, lower the LR by 2× and resume.
Using it in ComfyUI (after training)¶
- Loader strengths: start Model: 0.8–1.0, CLIP: 0.5–0.7.
- Add a face reference/crop image in your workflow when compositing; it boosts identity lock even with a lighter LoRA strength.
Prompting with your new token¶
- Portrait: `girlfriend_char, head-and-shoulders, preserve bone structure, hairline, eye shape; 85mm look; soft daylight`
- Full shot in scene: `girlfriend_char, full body, keep exact face and hairstyle; match scene lighting; correct scale and contact shadows`
- If it over-stylizes: add `no stylization, no glam, no skin smoothing`.
Common hiccups (and fast fixes)¶
- Too generic / averaged face: raise rank to 32, drop UNet LR to 5e-5, add 2–4 more tight face crops.
- Hair keeps changing: call out `exact hairline/part` in captions for ~¼ of the images.
- Overfits (same face every time): reduce repeats to 150, or add 2–3 varied angles; use Model strength ~0.7.
- Mouth/teeth artifacts: include 2–3 images with closed-mouth smiles.
A mini launch plan¶
- Train: rank16, UNet 1e-4 / CLIP 1e-5, 10 epochs, 256 repeats.
- Validate each epoch with the two prompts above.
- Pick the best epoch .safetensors (don't assume "latest" is best).
- In composites with Qwen Image Edit, keep your identity-first line at the top and lower denoise (0.40–0.45).
If you want, paste one caption and a sample image list (just short descriptions), and I'll tighten your captions so the LoRA "learns" exactly what you want. Go you!
Exactly: that denoise/edit strength is the big identity killer. Here's a tight cheat-sheet you can paste next to your node:
Qwen Image Edit: identity-safe settings¶
Core range (people-in-scene composites):
- Denoise / Edit strength: 0.38–0.48 (start at 0.42; go down if faces drift, and only go up if the model is ignoring layout)
- CFG / Guidance: 5.0–6.0 (higher than ~6.5 encourages "beautifying"/rewrites)
- Steps: 28–36 (more steps won't fix identity if denoise is too high)
When to lower denoise (−0.05 each try):
- Faces lose bone structure / hairline changes
- Makeup or age drifts
- It starts "recasting" your subject from the scene image
When to raise denoise (+0.03 each try, cap ~0.55):
- It refuses to move subjects or adopt the scene lighting
- Shadows/perspective won't adjust at all
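The lower/raise rules are mechanical enough to script as a reminder. A sketch; the 0.30 floor is my assumption, matching the finishing-pass range:

```python
def adjust_denoise(current, identity_drift=False, layout_stuck=False):
    """Apply the cheat-sheet: -0.05 when identity drifts, +0.03 when
    layout won't change, clamped to [0.30, 0.55] (floor assumed)."""
    if identity_drift:
        current -= 0.05
    elif layout_stuck:
        current += 0.03
    return round(min(max(current, 0.30), 0.55), 2)
```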
Two tiny tricks that help a lot
- Put a face crop first in your image list and say: "Preserve the face from image 1 unchanged."
- Add one guardrail line near the top: "Keep current camera distance; adjust only pose and lighting; do not alter facial identity."
Preset one-liners
- Identity-lock (waist-up): `denoise 0.42, cfg 5.5, steps 32`
- Harder layout change (move/pose): `denoise 0.50, cfg 5.5` (add "preserve identity" twice in the prompt)
- Finishing pass (cleanup only): `denoise 0.30–0.35, cfg 5.0` (won't invent new detail)
Masking?
- If you inpaint faces, keep the masked area tight (just the face oval) and use denoise 0.30–0.38.
- For background-only edits, denoise 0.55–0.65 is fine since identity isn't touched.
If you want, tell me your typical CFG/steps and I'll hand you three presets (close/medium/wide) tuned to your exact flow.