comfyanonymous
debabccb84
Bump ComfyUI version to v0.3.14
2025-02-05 15:48:13 -05:00
comfyanonymous
37cd448529
Set the shift for Lumina back to 6.
2025-02-05 14:49:52 -05:00
comfyanonymous
94f21f9301
Upcasting rope to fp32 seems to make no difference in this model.
2025-02-05 04:32:47 -05:00
comfyanonymous
60653004e5
Use regular numbers for rope in lumina model.
2025-02-05 04:17:25 -05:00
comfyanonymous
a57d635c5f
Fix lumina 2 batches.
2025-02-04 21:48:11 -05:00
comfyanonymous
016b219dcc
Add Lumina Image 2.0 to Readme.
2025-02-04 08:08:36 -05:00
comfyanonymous
8ac2dddeed
Lower the default shift of lumina to reduce artifacts.
2025-02-04 06:50:37 -05:00
comfyanonymous
3e880ac709
Fix on python 3.9
2025-02-04 04:20:56 -05:00
comfyanonymous
e5ea112a90
Support Lumina 2 model.
2025-02-04 04:16:30 -05:00
Raphael Walker
8d88bfaff9
allow searching for new .pt2 extension, which can contain AOTI compiled modules (#6689)
2025-02-03 17:07:35 -05:00
comfyanonymous
ed4d92b721
Model merging nodes for cosmos.
2025-02-03 03:31:39 -05:00
Comfy Org PR Bot
932ae8d9ca
Update frontend to v1.8.13 (#6682)
...
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-02-02 17:54:44 -05:00
comfyanonymous
44e19a28d3
Use maximum negative value instead of -inf for masks in text encoders.
...
This is probably more correct.
2025-02-02 09:46:00 -05:00
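The mask commit above swaps `-inf` for the dtype's most negative finite value; with `-inf`, a row that is entirely masked makes softmax compute nan (since `-inf - -inf` is nan), while a large-but-finite fill degrades gracefully to a uniform row. A minimal pure-Python sketch of the failure mode — illustrative only, not the ComfyUI code; `FP32_MIN` stands in for `torch.finfo(torch.float32).min`:

```python
import math

def softmax(row):
    # numerically stable softmax: subtract the row max before exponentiating
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

NEG_INF = float("-inf")
FP32_MIN = -3.4028234663852886e38  # smallest finite float32 value

# Fully masked row with -inf: max is -inf, so x - m = -inf - -inf = nan
masked_inf = softmax([NEG_INF, NEG_INF, NEG_INF])   # -> every entry is nan

# Fully masked row with the finite minimum: x - m = 0 everywhere
masked_min = softmax([FP32_MIN, FP32_MIN, FP32_MIN])  # -> [1/3, 1/3, 1/3]
```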
Dr.Lt.Data
0a0df5f136
better guide message for sageattention (#6634)
2025-02-02 09:26:47 -05:00
KarryCharon
24d6871e47
add disable-compres-response-body cli args; add compress middleware; (#6672)
2025-02-02 09:24:55 -05:00
comfyanonymous
9e1d301129
Only use stable cascade lora format with cascade model.
2025-02-01 06:35:22 -05:00
Terry Jia
768e035868
Add node for preview 3d animation (#6594)
...
* Add node for preview 3d animation
* remove bg_color param
* remove animation_speed param
2025-01-31 10:09:07 -08:00
Comfy Org PR Bot
669e0497ea
Update frontend to v1.8.12 (#6662)
...
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-01-31 10:07:37 -08:00
comfyanonymous
541dc08547
Update Readme.
2025-01-31 08:35:48 -05:00
comfyanonymous
8d8dc9a262
Allow batch of different sigmas when noise scaling.
2025-01-30 06:49:52 -05:00
comfyanonymous
2f98c24360
Update Readme with link to instruction for Nvidia 50 series.
2025-01-30 02:12:43 -05:00
comfyanonymous
ef85058e97
Bump ComfyUI version to v0.3.13
2025-01-29 16:07:12 -05:00
comfyanonymous
f9230bd357
Update the python version in some workflows.
2025-01-29 15:54:13 -05:00
comfyanonymous
537c27cbf3
Bump default cuda version in standalone package to 126.
2025-01-29 08:13:33 -05:00
comfyanonymous
6ff2e4d550
Remove logging call added in last commit.
...
This is called before the logging is set up so it messes up some things.
2025-01-29 08:08:01 -05:00
filtered
222f48c0f2
Allow changing folder_paths.base_path via command line argument. (#6600)
...
* Reimpl. CLI arg directly inside folder_paths.
* Update tests to use CLI arg mocking.
* Revert last-minute refactor.
* Fix test state polution.
2025-01-29 08:06:28 -05:00
comfyanonymous
13fd4d6e45
More friendly error messages for corrupted safetensors files.
2025-01-28 09:41:09 -05:00
Bradley Reynolds
1210d094c7
Convert latents_ubyte to 8-bit unsigned int before converting to CPU (#6300)
...
* Convert latents_ubyte to 8-bit unsigned int before converting to CPU
* Only convert to uint8 if directml_enabled
2025-01-28 08:22:54 -05:00
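For context on the commit above: quantizing preview latents to 8-bit before the `.cpu()` transfer moves a quarter of the bytes of a float32 tensor across the device boundary. A hedged pure-Python sketch of the clamp-and-quantize step (`to_ubyte` is an illustrative name, not the actual ComfyUI helper):

```python
def to_ubyte(values):
    # Scale [0, 1] floats to 0..255, round to nearest, and clamp out-of-range
    # values; out-of-range inputs (e.g. 1.2, -0.1) saturate instead of wrapping.
    return [min(255, max(0, int(v * 255.0 + 0.5))) for v in values]

to_ubyte([0.0, 0.5, 1.0, 1.2, -0.1])  # -> [0, 128, 255, 255, 0]
```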
comfyanonymous
255edf2246
Lower minimum ratio of loaded weights on Nvidia.
2025-01-27 05:26:51 -05:00
comfyanonymous
4f011b9a00
Better CLIPTextEncode error when clip input is None.
2025-01-26 06:04:57 -05:00
comfyanonymous
67feb05299
Remove redundant code.
2025-01-25 19:04:53 -05:00
comfyanonymous
6d21740346
Print ComfyUI version.
2025-01-25 15:03:57 -05:00
comfyanonymous
7fbf4b72fe
Update nightly pytorch ROCm command in Readme.
2025-01-24 06:15:54 -05:00
comfyanonymous
14ca5f5a10
Remove useless code.
2025-01-24 06:15:54 -05:00
filtered
ce557cfb88
Remove redundant code (#6576)
2025-01-23 05:57:41 -05:00
comfyanonymous
96e2a45193
Remove useless code.
2025-01-23 05:56:23 -05:00
Chenlei Hu
dfa2b6d129
Remove unused function lcm in conds.py (#6572)
2025-01-23 05:54:09 -05:00
Terry Jia
f3566f0894
remove some params from load 3d node (#6436)
2025-01-22 17:23:51 -05:00
Chenlei Hu
ca69b41cee
Add utils/ to web server developer codeowner (#6570)
2025-01-22 17:16:54 -05:00
Chenlei Hu
a058f52090
[i18n] Add /i18n endpoint to provide all custom node translations (#6558)
...
* [i18n] Add /i18n endpoint to provide all custom node translations
* Sort glob result for deterministic ordering
* Update comment
2025-01-22 17:15:45 -05:00
comfyanonymous
d6bbe8c40f
Remove support for python 3.8.
2025-01-22 17:04:30 -05:00
comfyanonymous
a7fe0a94de
Refactor and fixes for video latents.
2025-01-22 06:37:46 -05:00
chaObserv
e857dd48b8
Add gradient estimation sampler (#6554)
2025-01-22 05:29:40 -05:00
comfyanonymous
d303cb5341
Add missing case to CLIPLoader.
2025-01-21 08:57:04 -05:00
comfyanonymous
fb2ad645a3
Add FluxDisableGuidance node to disable using the guidance embed.
2025-01-20 14:50:24 -05:00
comfyanonymous
d8a7a32779
Cleanup old TODO.
2025-01-20 03:44:13 -05:00
comfyanonymous
a00e1489d2
LatentBatch fix for video latents
2025-01-19 06:02:14 -05:00
Sergii Dymchenko
ebf038d4fa
Use torch.special.expm1 (#6388)
...
* Use `torch.special.expm1`
This function provides greater precision than `exp(x) - 1` for small values of `x`.
Found with TorchFix https://github.com/pytorch-labs/torchfix/
* Use non-alias
2025-01-19 04:54:32 -05:00
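The precision claim in the commit body is easy to demonstrate with the stdlib equivalent, `math.expm1` — a sketch of the general numeric issue, not ComfyUI code:

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0   # exp(x) rounds to ~1.0, so subtracting 1 cancels most digits
precise = math.expm1(x)     # evaluates exp(x) - 1 directly, accurate near x = 0

rel_err_naive = abs(naive - precise) / precise   # on the order of 1e-4 for this x
rel_err_precise = abs(precise - x) / x           # essentially at machine precision
```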
Comfy Org PR Bot
b4de04a1c1
Update frontend to v1.7.14 (#6522)
...
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-01-18 21:43:37 -05:00
catboxanon
b1a02131c9
Remove comfy.samplers self-import (#6506)
2025-01-18 17:49:51 -05:00