comfyanonymous
019c7029ea
Add a way to set a different compute dtype for the model at runtime.
...
Currently only works for diffusion models.
2025-02-13 20:34:03 -05:00
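In plain PyTorch, running a model at a compute dtype that differs from its storage dtype is typically done with autocast; a minimal sketch of the idea (generic PyTorch, not the ComfyUI implementation, and run_with_compute_dtype is a hypothetical helper):

```python
import torch

def run_with_compute_dtype(model, x, compute_dtype=torch.bfloat16):
    # Hypothetical helper: the stored weights keep their dtype; only the
    # dtype used for computation in the forward pass changes via autocast.
    device_type = "cuda" if x.is_cuda else "cpu"
    with torch.autocast(device_type=device_type, dtype=compute_dtype):
        return model(x)
```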
comfyanonymous
8773ccf74d
Better memory estimation for ROCm cards that support mem efficient attention.
...
There is no way to check if the card actually supports it, so it assumes
that it does if you use --use-pytorch-cross-attention with your card.
2025-02-13 08:32:36 -05:00
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
Fix Ascend bf16 mixed-precision inference error ( #6794 )
2025-02-12 06:48:11 -05:00
comfyanonymous
ab888e1e0b
Add add_weight_wrapper function to model patcher.
...
Functions can now easily be added to wrap/modify model weights.
2025-02-12 05:55:35 -05:00
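A rough sketch of the weight-wrapper idea: callables are registered per weight name and applied whenever the weight is fetched, so weights can be modified on the fly without permanently patching the model. Class and method names below are illustrative, not the actual ModelPatcher API:

```python
import torch

class WeightWrapperSketch:
    """Illustrative sketch of the wrapper mechanism, not the real ModelPatcher."""
    def __init__(self, model):
        self.model = model
        self.weight_wrappers = {}  # weight name -> list of wrapper callables

    def add_weight_wrapper(self, name, function):
        # function(weight) -> modified weight, applied when the weight is used
        self.weight_wrappers.setdefault(name, []).append(function)

    def get_weight(self, name):
        weight = dict(self.model.named_parameters())[name]
        for wrap in self.weight_wrappers.get(name, []):
            weight = wrap(weight)
        return weight

# Example: halve one weight at runtime without touching the stored tensor.
# patcher = WeightWrapperSketch(model)
# patcher.add_weight_wrapper("blocks.0.attn.qkv.weight", lambda w: w * 0.5)
```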
comfyanonymous
d9f0fcdb0c
Cleanup.
2025-02-11 17:17:03 -05:00
HishamC
b124256817
Fix for running via DirectML ( #6542 )
...
* Fix for running via DirectML
Fix DirectML empty image generation issue with Flux1. Add CPU fallback for unsupported path. Verified the model works on AMD GPUs.
* Fix formatting
* Update causal mask calculation
2025-02-11 17:11:32 -05:00
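The CPU fallback mentioned in that PR follows a common pattern: try the operation on the accelerator and retry on CPU when the backend (e.g. DirectML) rejects it. A generic, hypothetical sketch, not the code from the PR:

```python
def with_cpu_fallback(fn, *tensors):
    # Hypothetical helper: run fn on the tensors' current device and fall
    # back to CPU if the backend does not support the operation.
    try:
        return fn(*tensors)
    except RuntimeError:
        cpu_tensors = [t.cpu() for t in tensors]
        return fn(*cpu_tensors).to(tensors[0].device)
```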
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
bananasss00
e57d2282d1
Fix incorrect Content-Type for WebP images ( #6752 )
2025-02-11 04:48:35 -05:00
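The fix amounts to picking the Content-Type from the actual image format instead of assuming PNG; a minimal aiohttp sketch (the extension-to-type mapping and handler are illustrative, not the exact server code):

```python
import os
from aiohttp import web

CONTENT_TYPES = {".png": "image/png", ".webp": "image/webp", ".jpg": "image/jpeg"}

async def view_image(request):
    # Choose the Content-Type from the file extension rather than hardcoding PNG.
    filename = request.query.get("filename", "")
    ext = os.path.splitext(filename)[1].lower()
    content_type = CONTENT_TYPES.get(ext, "application/octet-stream")
    with open(filename, "rb") as f:
        return web.Response(body=f.read(), content_type=content_type)
```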
comfyanonymous
4027466c80
Make lumina model work with any latent resolution.
2025-02-10 00:24:20 -05:00
comfyanonymous
095d867147
Remove useless function.
2025-02-09 07:02:57 -05:00
Pam
caeb27c3a5
res_multistep: Fix cfgpp and add ancestral samplers ( #6731 )
2025-02-08 19:39:58 -05:00
comfyanonymous
3d06e1c555
Make the error clearer to the user.
2025-02-08 18:57:24 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast ( #6453 )
...
Currently only applies to PyTorch nightly releases (>=20250208).
2025-02-08 17:00:56 -05:00
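The accumulation knob only exists in sufficiently new PyTorch builds, so it has to be probed rather than assumed; a sketch of how the --fast path might enable it (the attribute name follows the PR description and is absent from stable releases at the time):

```python
import torch

def try_enable_fp16_accumulation():
    # Only PyTorch nightly (>= 20250208) exposes this setting, so check for
    # it instead of assuming it exists.
    matmul_backend = torch.backends.cuda.matmul
    if hasattr(matmul_backend, "allow_fp16_accumulation"):
        matmul_backend.allow_fp16_accumulation = True
        return True
    return False
```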
comfyanonymous
af93c8d1ee
Document which text encoder to use for lumina 2.
2025-02-08 06:57:25 -05:00
Raphael Walker
832e3f5ca3
Fix another small bug in attention_bias redux ( #6737 )
...
* fix a bug in the attn_masked redux code when using weight=1.0
* fix another bug found in the same code
2025-02-07 14:44:43 -05:00
comfyanonymous
079eccc92a
Don't compress HTTP responses by default.
...
Remove argument to disable it.
Add new --enable-compress-response-body argument to enable it.
2025-02-07 03:29:21 -05:00
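For context, aiohttp compresses a response only when asked to; a minimal sketch of an opt-in middleware gated by a flag such as the new --enable-compress-response-body (the flag wiring shown here is illustrative):

```python
from aiohttp import web

def compress_middleware_factory(enabled):
    @web.middleware
    async def compress_middleware(request, handler):
        response = await handler(request)
        if enabled and isinstance(response, web.Response):
            # Let aiohttp negotiate gzip/deflate with the client.
            response.enable_compression()
        return response
    return compress_middleware
```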
Raphael Walker
b6951768c4
fix a bug in the attn_masked redux code when using weight=1.0 ( #6721 )
2025-02-06 16:51:16 -05:00
Comfy Org PR Bot
fca304debf
Update frontend to v1.8.14 ( #6724 )
...
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-02-06 10:43:10 -05:00
comfyanonymous
14880e6dba
Remove some useless code.
2025-02-06 05:00:37 -05:00
Chenlei Hu
f1059b0b82
Remove unused GET /files API endpoint ( #6714 )
2025-02-05 18:48:36 -05:00
comfyanonymous
debabccb84
Bump ComfyUI version to v0.3.14
2025-02-05 15:48:13 -05:00
comfyanonymous
37cd448529
Set the shift for Lumina back to 6.
2025-02-05 14:49:52 -05:00
comfyanonymous
94f21f9301
Upcasting rope to fp32 seems to make no difference in this model.
2025-02-05 04:32:47 -05:00
comfyanonymous
60653004e5
Use regular numbers for rope in lumina model.
2025-02-05 04:17:25 -05:00
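"Regular numbers" here means doing the rotary embedding with real cos/sin arithmetic instead of complex tensors; a generic sketch of the real-valued form (not the Lumina code itself):

```python
import torch

def apply_rope_real(x, angles):
    # x: (..., d) with d even; angles: rotation angles broadcastable to (..., d/2).
    # Rotate each (even, odd) channel pair with cos/sin instead of multiplying
    # complex numbers.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```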
comfyanonymous
a57d635c5f
Fix lumina 2 batches.
2025-02-04 21:48:11 -05:00
comfyanonymous
016b219dcc
Add Lumina Image 2.0 to Readme.
2025-02-04 08:08:36 -05:00
comfyanonymous
8ac2dddeed
Lower the default shift of lumina to reduce artifacts.
2025-02-04 06:50:37 -05:00
comfyanonymous
3e880ac709
Fix on python 3.9
2025-02-04 04:20:56 -05:00
comfyanonymous
e5ea112a90
Support Lumina 2 model.
2025-02-04 04:16:30 -05:00
Raphael Walker
8d88bfaff9
allow searching for new .pt2 extension, which can contain AOTI compiled modules ( #6689 )
2025-02-03 17:07:35 -05:00
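For reference, .pt2 packages produced by AOTInductor come with a loader in recent PyTorch releases; a hedged sketch (availability of the entry point depends on the PyTorch version, so treat the import as an assumption):

```python
def load_pt2_module(path):
    # AOTInductor packaged modules (.pt2) can be loaded in recent PyTorch
    # releases; older builds may lack the loader, so probe before calling.
    try:
        from torch._inductor import aoti_load_package
    except ImportError as e:
        raise RuntimeError("this PyTorch build cannot load .pt2 AOTI packages") from e
    return aoti_load_package(path)
```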
comfyanonymous
ed4d92b721
Model merging nodes for cosmos.
2025-02-03 03:31:39 -05:00
Comfy Org PR Bot
932ae8d9ca
Update frontend to v1.8.13 ( #6682 )
...
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-02-02 17:54:44 -05:00
comfyanonymous
44e19a28d3
Use maximum negative value instead of -inf for masks in text encoders.
...
This is probably more correct.
2025-02-02 09:46:00 -05:00
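The change boils down to building additive attention masks with the dtype's most negative finite value rather than float("-inf"), which keeps softmax finite even when a row is fully masked; a generic sketch:

```python
import torch

def padding_bias(attention_mask, dtype=torch.float32):
    # attention_mask: (batch, seq) with 1 for real tokens and 0 for padding.
    # torch.finfo(dtype).min instead of float("-inf") avoids NaNs if every
    # position in a row ends up masked.
    bias = torch.zeros(attention_mask.shape, dtype=dtype)
    bias.masked_fill_(attention_mask == 0, torch.finfo(dtype).min)
    return bias.unsqueeze(1).unsqueeze(1)  # (batch, 1, 1, seq) for broadcasting
```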
Dr.Lt.Data
0a0df5f136
better guide message for sageattention ( #6634 )
2025-02-02 09:26:47 -05:00
KarryCharon
24d6871e47
add disable-compres-response-body CLI arg; add compress middleware ( #6672 )
2025-02-02 09:24:55 -05:00
comfyanonymous
9e1d301129
Only use stable cascade lora format with cascade model.
2025-02-01 06:35:22 -05:00
Terry Jia
768e035868
Add node to preview 3D animation ( #6594 )
...
* Add node to preview 3D animation
* remove bg_color param
* remove animation_speed param
2025-01-31 10:09:07 -08:00
Comfy Org PR Bot
669e0497ea
Update frontend to v1.8.12 ( #6662 )
...
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-01-31 10:07:37 -08:00
comfyanonymous
541dc08547
Update Readme.
2025-01-31 08:35:48 -05:00
comfyanonymous
8d8dc9a262
Allow a batch of different sigmas when scaling noise.
2025-01-30 06:49:52 -05:00
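Supporting per-sample sigmas mostly means reshaping them so they broadcast over the latent dimensions; a generic sketch of noise scaling with a batched sigma (names are illustrative):

```python
import torch

def scale_noise(latent, noise, sigma):
    # sigma may be a scalar or a (batch,) tensor; reshape it so it broadcasts
    # over (batch, channels, height, width).
    sigma = torch.as_tensor(sigma, dtype=latent.dtype, device=latent.device)
    sigma = sigma.reshape(-1, *([1] * (latent.ndim - 1)))
    return latent + noise * sigma
```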
comfyanonymous
2f98c24360
Update Readme with link to instructions for Nvidia 50 series.
2025-01-30 02:12:43 -05:00
comfyanonymous
ef85058e97
Bump ComfyUI version to v0.3.13
2025-01-29 16:07:12 -05:00
comfyanonymous
f9230bd357
Update the python version in some workflows.
2025-01-29 15:54:13 -05:00
comfyanonymous
537c27cbf3
Bump default CUDA version in standalone package to 12.6 (cu126).
2025-01-29 08:13:33 -05:00
comfyanonymous
6ff2e4d550
Remove logging call added in last commit.
...
This is called before the logging is set up, so it messes up some things.
2025-01-29 08:08:01 -05:00
filtered
222f48c0f2
Allow changing folder_paths.base_path via command line argument. ( #6600 )
...
* Reimpl. CLI arg directly inside folder_paths.
* Update tests to use CLI arg mocking.
* Revert last-minute refactor.
* Fix test state pollution.
2025-01-29 08:06:28 -05:00
comfyanonymous
13fd4d6e45
More friendly error messages for corrupted safetensors files.
2025-01-28 09:41:09 -05:00
Bradley Reynolds
1210d094c7
Convert latents_ubyte to 8-bit unsigned int before converting to CPU ( #6300 )
...
* Convert latents_ubyte to 8-bit unsigned int before converting to CPU
* Only convert to uint8 if directml_enabled
2025-01-28 08:22:54 -05:00
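The fix is essentially to clamp and cast to uint8 on the device before the CPU transfer, so the copy moves one byte per value instead of a float; a generic sketch (function and variable names are illustrative):

```python
import torch

def latents_to_ubyte(latents_01):
    # latents_01: float tensor already scaled to [0, 1].
    return (latents_01.clamp(0, 1).mul(255).round()
            .to(dtype=torch.uint8)   # convert on the current device first
            .cpu())                  # then transfer the smaller uint8 tensor
```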
comfyanonymous
255edf2246
Lower minimum ratio of loaded weights on Nvidia.
2025-01-27 05:26:51 -05:00