comfyanonymous
170c7bb90c
Fix contiguous issue with pytorch nightly. ( #8729 )
2025-06-29 06:38:40 -04:00
comfyanonymous
396454fa41
Reorder the schedulers so simple is the default one. ( #8722 )
2025-06-28 18:12:56 -04:00
xufeng
ba9548f756
Add "--whitelist-custom-nodes" arg for comfy core to go with "--disable-all-custom-nodes" for development purposes ( #8592 )
* feat: Add "--whitelist-custom-nodes" arg for comfy core to go with "--disable-all-custom-nodes" for development purposes
* feat: Simplify custom nodes whitelist logic to use consistent code paths
2025-06-28 15:24:02 -04:00
comfyanonymous
c36be0ea09
Fix memory estimation bug with kontext. ( #8709 )
2025-06-27 17:21:12 -04:00
comfyanonymous
9093301a49
Don't add a tiny bit of random noise when VAE encoding. ( #8705 )
Shouldn't change outputs but might make things a tiny bit more deterministic.
2025-06-27 14:14:56 -04:00
comfyanonymous
ef5266b1c1
Support Flux Kontext Dev model. ( #8679 )
2025-06-26 11:28:41 -04:00
comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. ( #8672 )
2025-06-26 03:39:09 -04:00
comfyanonymous
ec70ed6aea
Omnigen2 model implementation. ( #8669 )
2025-06-25 19:35:57 -04:00
comfyanonymous
7a13f74220
Rename unet -> diffusion model. ( #8659 )
2025-06-25 04:52:34 -04:00
chaObserv
8042eb20c6
Singlestep DPM++ SDE for RF ( #8627 )
Refactor the algorithm and apply alpha scaling.
2025-06-24 14:59:09 -04:00
comfyanonymous
1883e70b43
Fix exception when using a noise mask with cosmos predict2. ( #8621 )
* Fix exception when using a noise mask with cosmos predict2.
* Fix ruff.
2025-06-21 03:30:39 -04:00
comfyanonymous
f7fb193712
Small flux optimization. ( #8611 )
2025-06-20 05:37:32 -04:00
comfyanonymous
7e9267fa77
Make flux controlnet work with the sd3 text encoder. ( #8599 )
2025-06-19 18:50:05 -04:00
comfyanonymous
91d40086db
Fix pytorch warning. ( #8593 )
2025-06-19 11:04:52 -04:00
chaObserv
8e81c507d2
Multistep DPM++ SDE samplers for RF ( #8541 )
Include alpha in sampling and minor refactoring
2025-06-16 14:47:10 -04:00
comfyanonymous
e1c6dc720e
Allow setting min_length with tokenizer_data. ( #8547 )
2025-06-16 13:43:52 -04:00
comfyanonymous
7ea79ebb9d
Add correct eps to ltxv rmsnorm. ( #8542 )
2025-06-15 12:21:25 -04:00
comfyanonymous
d6a2137fc3
Support Cosmos predict2 image to video models. ( #8535 )
Use the CosmosPredict2ImageToVideoLatent node.
2025-06-14 21:37:07 -04:00
chaObserv
53e8d8193c
Generalize SEEDS samplers ( #8529 )
Restore VP algorithm for RF and refactor noise_coeffs and half-logSNR calculations
2025-06-14 16:58:16 -04:00
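Background note (this definition comes from the DPM-Solver/SEEDS literature, not from the commit itself): the half-logSNR these samplers work with is normally

$$\lambda_t = \log\frac{\alpha_t}{\sigma_t} = \tfrac{1}{2}\log \mathrm{SNR}(t),$$

where $\alpha_t$ and $\sigma_t$ are the signal and noise coefficients of the forward (VP) process referenced in the commit body.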
comfyanonymous
29596bd53f
Small cosmos attention code refactor. ( #8530 )
2025-06-14 05:02:05 -04:00
Kohaku-Blueleaf
520eb77b72
LoRA Trainer: LoRA training node using the weight adapter scheme ( #8446 )
2025-06-13 19:25:59 -04:00
comfyanonymous
c69af655aa
Uncap cosmos predict2 resolution and fix memory estimation. ( #8518 )
2025-06-13 07:30:18 -04:00
comfyanonymous
251f54a2ad
Basic initial support for cosmos predict2 text to image 2B and 14B models. ( #8517 )
2025-06-13 07:05:23 -04:00
pythongosssss
50c605e957
Add support for sqlite database ( #8444 )
* Add support for sqlite database
* fix
2025-06-11 16:43:39 -04:00
comfyanonymous
8a4ff747bd
Fix mistake in last commit. ( #8496 )
* Move to right place.
2025-06-11 15:13:29 -04:00
comfyanonymous
af1eb58be8
Fix black images on some flux models in fp16. ( #8495 )
2025-06-11 15:09:11 -04:00
comfyanonymous
6e28a46454
Apple is most likely never going to fix the fp16 attention bug. ( #8485 )
2025-06-10 13:06:24 -04:00
comfyanonymous
7f800d04fa
Enable AMD fp8 and pytorch attention on some GPUs. ( #8474 )
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
comfyanonymous
97755eed46
Enable fp8 ops by default on gfx1201 ( #8464 )
2025-06-08 14:15:34 -04:00
comfyanonymous
daf9d25ee2
Cleaner torch version comparisons. ( #8453 )
2025-06-07 10:01:15 -04:00
comfyanonymous
3b4b171e18
Alternate fix for #8435 ( #8442 )
2025-06-06 09:43:27 -04:00
comfyanonymous
4248b1618f
Let chroma TE work on regular flux. ( #8429 )
2025-06-05 10:07:17 -04:00
comfyanonymous
fb4754624d
Make the casting in lists the same as for regular inputs. ( #8373 )
2025-06-01 05:39:54 -04:00
comfyanonymous
19e45e9b0e
Make it easier to pass lists of tensors to models. ( #8358 )
2025-05-31 20:00:20 -04:00
drhead
08b7cc7506
use fused multiply-add pointwise ops in chroma ( #8279 )
2025-05-30 18:09:54 -04:00
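The commit title above only names the technique; as a rough, generic illustration (not the actual chroma code changed in #8279), a pointwise a * b + c can be collapsed into a single fused multiply-add op in PyTorch:

```python
import torch

a = torch.randn(4, 8)
b = torch.randn(4, 8)
c = torch.randn(4, 8)

# Unfused: the multiply and the add run as two separate pointwise ops.
out_unfused = a * b + c

# Fused multiply-add: one pointwise op computing c + a * b.
out_fused = torch.addcmul(c, a, b)

assert torch.allclose(out_unfused, out_fused)
```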
comfyanonymous
704fc78854
Put the ROCm version in a tuple to make it easier to enable features based on it. ( #8348 )
2025-05-30 15:41:02 -04:00
comfyanonymous
f2289a1f59
Delete useless file. ( #8327 )
2025-05-29 08:29:37 -04:00
comfyanonymous
5e5e46d40c
Add WAN Phantom support (not really tested). ( #8321 )
2025-05-28 23:46:15 -04:00
comfyanonymous
1c1687ab1c
Support HiDream SimpleTuner loras. ( #8318 )
2025-05-28 18:47:15 -04:00
comfyanonymous
06c661004e
Memory estimation code can now take into account conds. ( #8307 )
2025-05-27 15:09:05 -04:00
comfyanonymous
89a84e32d2
Disable initial GPU load when novram is used. ( #8294 )
2025-05-26 16:39:27 -04:00
comfyanonymous
e5799c4899
Enable pytorch attention by default on AMD gfx1151 ( #8282 )
2025-05-26 04:29:25 -04:00
comfyanonymous
a0651359d7
Return a proper error if the diffusion model is not detected properly. ( #8272 )
2025-05-25 05:28:11 -04:00
comfyanonymous
5a87757ef9
Better error if sageattention is installed but a dependency is missing. ( #8264 )
2025-05-24 06:43:12 -04:00
comfyanonymous
0b50d4c0db
Add argument to explicitly enable fp8 compute support. ( #8257 )
This can be used to test if your current GPU/pytorch version supports fp8 matrix multiplication in combination with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
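The new argument itself is not named anywhere in this log, so the snippet below is only a hypothetical, standalone probe of the capability the commit body describes (fp8 matrix multiplication support); it is not ComfyUI code, and torch._scaled_mm is a private PyTorch API whose signature has shifted between releases:

```python
import torch

def probe_fp8_matmul(device: str = "cuda") -> bool:
    """Best-effort check: can this GPU / PyTorch build run an fp8 matmul?"""
    if not hasattr(torch, "float8_e4m3fn") or not torch.cuda.is_available():
        return False
    try:
        a = torch.randn(16, 16, device=device, dtype=torch.bfloat16).to(torch.float8_e4m3fn)
        # _scaled_mm wants the second operand in column-major layout.
        b = torch.randn(16, 16, device=device, dtype=torch.bfloat16).to(torch.float8_e4m3fn).t()
        scale = torch.ones((), device=device)  # per-tensor scales, float32 scalars
        torch._scaled_mm(a, b, scale_a=scale, scale_b=scale, out_dtype=torch.bfloat16)
        return True
    except Exception:
        return False

if __name__ == "__main__":
    print("fp8 matmul appears supported:", probe_fp8_matmul())
```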
drhead
30b2eb8a93
create arange on-device ( #8255 )
2025-05-23 16:15:06 -04:00
comfyanonymous
f85c08df06
Make VACE conditionings stackable. ( #8240 )
2025-05-22 19:22:26 -04:00
comfyanonymous
87f9130778
Revert "This doesn't seem to be needed on chroma. ( #8209 )" ( #8210 )
This reverts commit 7e84bf53737879ace37a68dc93e0df7704a53514.
2025-05-20 05:39:55 -04:00
comfyanonymous
7e84bf5373
This doesn't seem to be needed on chroma. ( #8209 )
2025-05-20 05:29:23 -04:00
comfyanonymous
aee2908d03
Remove useless log. ( #8166 )
2025-05-17 06:27:34 -04:00