comfyanonymous
1fc00ba4b6
Make hidream work with any latent resolution.
2025-04-16 18:34:14 -04:00
comfyanonymous
9899d187b1
Limit T5 to 128 tokens for HiDream: #7620
2025-04-16 18:07:55 -04:00
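The commit above caps the T5 text encoder at 128 tokens for HiDream. A minimal sketch of the idea (not ComfyUI's actual implementation; the helper name is illustrative):

```python
# Illustrative sketch only: cap a token sequence at a fixed limit, as the
# commit does for T5 in HiDream. The limit comes from the commit message;
# the function itself is a hypothetical stand-in for the real tokenizer path.
T5_TOKEN_LIMIT = 128

def truncate_tokens(tokens, limit=T5_TOKEN_LIMIT):
    """Drop any tokens past the limit; shorter sequences pass through unchanged."""
    return tokens[:limit]
```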
comfyanonymous
f00f340a56
Reuse code from flux model.
2025-04-16 17:43:55 -04:00
Chenlei Hu
cce1d9145e
[Type] Mark input options NotRequired (#7614)
2025-04-16 15:41:00 -04:00
comfyanonymous
b4dc03ad76
Fix issue on old torch.
2025-04-16 04:53:56 -04:00
comfyanonymous
9ad792f927
Basic support for hidream i1 model.
2025-04-15 17:35:05 -04:00
comfyanonymous
6fc5dbd52a
Cleanup.
2025-04-15 12:13:28 -04:00
comfyanonymous
3e8155f7a3
More flexible long clip support.
...
Add clip g long clip support.
Text encoder refactor.
Support llama models with different vocab sizes.
2025-04-15 10:32:21 -04:00
comfyanonymous
8a438115fb
add RMSNorm to comfy.ops
2025-04-14 18:00:33 -04:00
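The RMSNorm op added to `comfy.ops` above computes root-mean-square normalization. A pure-Python sketch of just the math (the real op works on torch tensors; this stand-in operates on plain lists):

```python
import math

# Minimal pure-Python sketch of RMSNorm: y_i = x_i / sqrt(mean(x^2) + eps) * w_i.
# This shows only the formula; it is not the comfy.ops implementation.
def rms_norm(xs, weight=None, eps=1e-6):
    rms = math.sqrt(sum(x * x for x in xs) / len(xs) + eps)
    if weight is None:
        weight = [1.0] * len(xs)
    return [x / rms * w for x, w in zip(xs, weight)]
```

Unlike LayerNorm, RMSNorm skips mean-centering, which makes it cheaper while normalizing activations to roughly unit RMS.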
chaObserv
e51d9ba5fc
Add SEEDS (stage 2 & 3 DP) sampler (#7580)
...
* Add seeds stage 2 & 3 (DP) sampler
* Change the name to SEEDS in comment
2025-04-12 18:36:08 -04:00
catboxanon
1714a4c158
Add CublasOps support (#7574)
...
* CublasOps support
* Guard CublasOps behind --fast arg
2025-04-12 18:29:15 -04:00
Chargeuk
ed945a1790
Dependency Aware Node Caching for low RAM/VRAM machines (#7509)
...
* Add a dependency-aware cache that removes a cached node as soon as all of its descendants have executed. This allows users with lower RAM to run workflows they would otherwise not be able to run. The downside is that every workflow will fully run each time, even if no nodes have changed.
* remove test code
* tidy code
2025-04-11 06:55:51 -04:00
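The eviction idea described in the commit above can be sketched as follows. This is a hypothetical illustration of the policy, not ComfyUI's cache API; all names here are invented:

```python
# Sketch of dependency-aware eviction: a node's cached output is freed as
# soon as every node that consumes it has executed, trading recomputation
# on later runs for a lower peak RAM/VRAM footprint.
class DependencyAwareCache:
    def __init__(self, dependents):
        # dependents: node_id -> set of node ids that consume its output
        self.remaining = {k: set(v) for k, v in dependents.items()}
        self.values = {}

    def store(self, node_id, value):
        self.values[node_id] = value

    def mark_executed(self, node_id, inputs):
        # One consumer of each input has finished; evict a cached value
        # once it has no consumers left.
        for dep in inputs:
            self.remaining[dep].discard(node_id)
            if not self.remaining[dep]:
                self.values.pop(dep, None)
```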
Chenlei Hu
98bdca4cb2
Deprecate InputTypeOptions.defaultInput (#7551)
...
* Deprecate InputTypeOptions.defaultInput
* nit
* nit
2025-04-10 06:57:06 -04:00
Jedrzej Kosinski
e346d8584e
Add prepare_sampling wrapper allowing custom nodes to more accurately report noise_shape (#7500)
2025-04-09 09:43:35 -04:00
comfyanonymous
70d7242e57
Support the wan fun reward loras.
2025-04-07 05:01:47 -04:00
comfyanonymous
3bfe4e5276
Support 512 siglip model.
2025-04-05 07:01:01 -04:00
Raphael Walker
89e4ea0175
Add activations_shape info in UNet models (#7482)
...
* Add activations_shape info in UNet models
* activations_shape should be a list
2025-04-04 21:27:54 -04:00
comfyanonymous
3a100b9a55
Disable partial offloading of audio VAE.
2025-04-04 21:24:56 -04:00
BiologicalExplosion
2222cf67fd
MLU memory optimization (#7470)
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
BVH
301e26b131
Add option to store TE in bf16 (#7461)
2025-04-01 13:48:53 -04:00
comfyanonymous
a3100c8452
Remove useless code.
2025-03-29 20:12:56 -04:00
comfyanonymous
2d17d8910c
Don't error if wan concat image has extra channels.
2025-03-28 08:49:29 -04:00
comfyanonymous
0a1f8869c9
Add WanFunInpaintToVideo node for the Wan fun inpaint models.
2025-03-27 11:13:27 -04:00
comfyanonymous
3661c833bc
Support the WAN 2.1 fun control models.
...
Use the new WanFunControlToVideo node.
2025-03-26 19:54:54 -04:00
comfyanonymous
8edc1f44c1
Support more float8 types.
2025-03-25 05:23:49 -04:00
comfyanonymous
e471c726e5
Fallback to pytorch attention if sage attention fails.
2025-03-22 15:45:56 -04:00
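The fallback behavior in the commit above follows a standard try-the-fast-kernel pattern. A hedged sketch (function names are placeholders, not ComfyUI's attention APIs):

```python
# Illustrative fallback pattern: attempt the optional sage attention kernel
# and fall back to the always-available pytorch implementation if it raises
# (e.g. on an unsupported dtype or head dimension).
def attention_with_fallback(q, k, v, sage_fn, pytorch_fn):
    try:
        return sage_fn(q, k, v)
    except Exception:
        return pytorch_fn(q, k, v)
```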
comfyanonymous
d9fa9d307f
Automatically set the right sampling type for lotus.
2025-03-21 14:19:37 -04:00
thot experiment
83e839a89b
Native LotusD Implementation (#7125)
...
* draft pass at a native comfy implementation of Lotus-D depth and normal estimation
* fix model_sampling kludges
* fix ruff
---------
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
2025-03-21 14:04:15 -04:00
comfyanonymous
3872b43d4b
A few fixes for the hunyuan3d models.
2025-03-20 04:52:31 -04:00
comfyanonymous
32ca0805b7
Fix orientation of hunyuan 3d model.
2025-03-19 19:55:24 -04:00
comfyanonymous
11f1b41bab
Initial Hunyuan3Dv2 implementation.
...
Supports the multiview, mini, turbo models and VAEs.
2025-03-19 16:52:58 -04:00
comfyanonymous
3b19fc76e3
Allow disabling pe in flux code for some other models.
2025-03-18 05:09:25 -04:00
comfyanonymous
50614f1b79
Fix regression with clip vision.
2025-03-17 13:56:11 -04:00
comfyanonymous
6dc7b0bfe3
Add support for giant dinov2 image encoder.
2025-03-17 05:53:54 -04:00
comfyanonymous
e8e990d6b8
Cleanup code.
2025-03-16 06:29:12 -04:00
Jedrzej Kosinski
2e24a15905
Call unpatch_hooks at the start of ModelPatcher.partially_unload (#7253)
...
* Call unpatch_hooks at the start of ModelPatcher.partially_unload
* Only call unpatch_hooks in partially_unload if lowvram is possible
2025-03-16 06:02:45 -04:00
chaObserv
fd5297131f
Guard the edge cases of noise term in er_sde (#7265)
2025-03-16 06:02:25 -04:00
comfyanonymous
55a1b09ddc
Allow loading diffusion model files with the "Load Checkpoint" node.
2025-03-15 08:27:49 -04:00
comfyanonymous
3c3988df45
Show a better error message if the VAE is invalid.
2025-03-15 08:26:36 -04:00
comfyanonymous
a2448fc527
Remove useless code.
2025-03-14 18:10:37 -04:00
comfyanonymous
6a0daa79b6
Make the SkipLayerGuidanceDIT node work on WAN.
2025-03-14 10:55:19 -04:00
FeepingCreature
9c98c6358b
Tolerate missing @torch.library.custom_op (#7234)
...
This can happen on Pytorch versions older than 2.4.
2025-03-14 09:51:26 -04:00
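The compatibility guard described above can be sketched like this: probe for `torch.library.custom_op` and degrade to a no-op decorator when it is absent. Shown here against a stand-in module object rather than torch itself:

```python
# Hedged sketch of the guard pattern: on PyTorch older than 2.4,
# torch.library.custom_op does not exist, so code can probe for the
# attribute and fall back to a decorator that leaves functions unchanged.
def get_custom_op(library_module):
    custom_op = getattr(library_module, "custom_op", None)
    if custom_op is not None:
        return custom_op
    # Fallback for old torch: a decorator factory that is a no-op.
    def noop_custom_op(*args, **kwargs):
        return lambda fn: fn
    return noop_custom_op
```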
FeepingCreature
7aceb9f91c
Add --use-flash-attention flag. (#7223)
...
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
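A boolean switch like the `--use-flash-attention` flag above is typically wired up with argparse. The flag name comes from the commit; the parser here is an illustrative sketch, not ComfyUI's actual CLI setup:

```python
import argparse

# Sketch of a boolean CLI switch; argparse maps --use-flash-attention
# to the attribute args.use_flash_attention (dashes become underscores).
parser = argparse.ArgumentParser()
parser.add_argument("--use-flash-attention", action="store_true",
                    help="Prefer FlashAttention kernels over the default attention.")
args = parser.parse_args(["--use-flash-attention"])
```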
comfyanonymous
35504e2f93
Fix.
2025-03-13 15:03:18 -04:00
comfyanonymous
299436cfed
Print mac version.
2025-03-13 10:05:40 -04:00
Chenlei Hu
9b6cd9b874
[NodeDef] Add documentation on multi_select input option (#7212)
2025-03-12 17:29:39 -04:00
chaObserv
3fc688aebd
Ensure the extra_args in dpmpp sde series (#7204)
2025-03-12 17:28:59 -04:00
chaObserv
01015bff16
Add er_sde sampler (#7187)
2025-03-12 02:42:37 -04:00
comfyanonymous
ca8efab79f
Support control loras on Wan.
2025-03-10 17:23:13 -04:00
comfyanonymous
9aac21f894
Fix issues with new hunyuan img2vid model and bump version to v0.3.26
2025-03-09 05:07:22 -04:00