Commit Graph

215 Commits

Author SHA1 Message Date
comfyanonymous
68d12b530e Merge branch 'tiled_sampler' of https://github.com/BlenderNeko/ComfyUI 2023-05-14 15:39:39 -04:00
comfyanonymous
3a1f47764d Print the torch device that is used on startup. 2023-05-13 17:11:27 -04:00
BlenderNeko
1201d2eae5 Make nodes map over input lists (#579)
* allow nodes to map over lists

* make work with IS_CHANGED and VALIDATE_INPUTS

* give list outputs distinct socket shape

* add rebatch node

* add batch index logic

* add repeat latent batch

* deal with noise mask edge cases in latentfrombatch
2023-05-13 11:15:45 -04:00
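The list-mapping behavior described in this PR can be sketched roughly as follows. This is a hypothetical simplification, not ComfyUI's actual executor code: `map_node_over_list` and its broadcasting rule are illustrative assumptions.

```python
# Sketch (assumed behavior): when any input is a list, the node function is
# applied element-wise, with shorter lists and scalars broadcast to match.
def map_node_over_list(node_fn, inputs):
    """inputs: dict of name -> value or list of values."""
    list_lens = [len(v) for v in inputs.values() if isinstance(v, list)]
    if not list_lens:
        return node_fn(**inputs)  # no list inputs: plain scalar call
    n = max(list_lens)
    results = []
    for i in range(n):
        call = {k: (v[i % len(v)] if isinstance(v, list) else v)
                for k, v in inputs.items()}
        results.append(node_fn(**call))
    return results  # list output, one result per mapped call
```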
BlenderNeko
19c014f429 comment out annoying print statement 2023-05-12 23:57:40 +02:00
BlenderNeko
d9e088ddfd minor changes for tiled sampler 2023-05-12 23:49:09 +02:00
comfyanonymous
f7c0f75d1f Auto batching improvements.
Try batching when cond sizes don't match with smart padding.
2023-05-10 13:59:24 -04:00
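The "smart padding" idea — batching conditionings whose sizes don't match by padding the shorter ones to a common length — can be sketched like this. Plain lists stand in for tensors to keep the example dependency-free, and padding by repeating the last element is an assumption for illustration.

```python
# Sketch: pad each conditioning sequence to the longest length in the group
# (here by repeating its last element) so all entries can be batched together.
def pad_and_batch(conds):
    max_len = max(len(c) for c in conds)
    return [c + [c[-1]] * (max_len - len(c)) for c in conds]
```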
comfyanonymous
314e526c5c Not needed anymore because sampling works with any latent size. 2023-05-09 12:18:18 -04:00
comfyanonymous
c6e34963e4 Make t2i adapter work with any latent resolution. 2023-05-08 18:15:19 -04:00
comfyanonymous
a1f12e370d Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI 2023-05-07 17:19:03 -04:00
comfyanonymous
6fc4917634 Make maximum_batch_area take into account pytorch 2.0 attention function.
More conservative xformers maximum_batch_area.
2023-05-06 19:58:54 -04:00
comfyanonymous
678f933d38 maximum_batch_area for xformers.
Remove useless code.
2023-05-06 19:28:46 -04:00
EllangoK
8e03c789a2 auto-launch cli arg 2023-05-06 16:59:40 -04:00
comfyanonymous
cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2023-05-05 18:11:41 -04:00
comfyanonymous
af9cc1fb6a Search recursively in subfolders for embeddings. 2023-05-05 01:28:48 -04:00
comfyanonymous
6ee11d7bc0 Fix import. 2023-05-05 00:19:35 -04:00
comfyanonymous
bae4fb4a9d Fix imports. 2023-05-04 18:10:29 -04:00
comfyanonymous
fcf513e0b6 Refactor. 2023-05-03 17:48:35 -04:00
comfyanonymous
a74e176a24 Merge branch 'tiled-progress' of https://github.com/pythongosssss/ComfyUI 2023-05-03 16:24:56 -04:00
pythongosssss
5eeecf3fd5 remove unused import 2023-05-03 18:21:23 +01:00
pythongosssss
8912623ea9 use comfy progress bar 2023-05-03 18:19:22 +01:00
comfyanonymous
908dc1d5a8 Add a total_steps value to sampler callback. 2023-05-03 12:58:10 -04:00
pythongosssss
fdf57325f4 Merge remote-tracking branch 'origin/master' into tiled-progress 2023-05-03 17:33:42 +01:00
pythongosssss
27df74101e reduce duplication 2023-05-03 17:33:19 +01:00
comfyanonymous
93c64afaa9 Use sampler callback instead of tqdm hook for progress bar. 2023-05-02 23:00:49 -04:00
pythongosssss
06ad35b493 added progress to encode + upscale 2023-05-02 19:18:07 +01:00
comfyanonymous
ba8a4c3667 Change latent resolution step to 8. 2023-05-02 14:17:51 -04:00
comfyanonymous
66c8aa5c3e Make unet work with any input shape. 2023-05-02 13:31:43 -04:00
comfyanonymous
9c335a553f LoKR support. 2023-05-01 18:18:23 -04:00
comfyanonymous
d3293c8339 Properly disable all progress bars when disable_pbar=True 2023-05-01 15:52:17 -04:00
BlenderNeko
a2e18b1504 allow disabling of progress bar when sampling 2023-04-30 18:59:58 +02:00
comfyanonymous
071011aebe Mask strength should be separate from area strength. 2023-04-29 20:06:53 -04:00
comfyanonymous
870fae62e7 Merge branch 'condition_by_mask_node' of https://github.com/guill/ComfyUI 2023-04-29 15:05:18 -04:00
Jacob Segal
af02393c2a Default to sampling entire image
By default, when applying a mask to a condition, the entire image will
still be used for sampling. The new "set_area_to_bounds" option on the
node will allow the user to automatically limit conditioning to the
bounds of the mask.

I've also removed the dependency on torchvision for calculating bounding
boxes. I've taken the opportunity to fix some frustrating details in the
other version:
1. An all-0 mask will no longer cause an error
2. Indices are returned as integers instead of floats so they can be
   used to index into tensors.
2023-04-29 00:16:58 -07:00
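The bounding-box computation described above — no torchvision dependency, integer indices, and an all-zero mask handled gracefully — could look roughly like this. `mask_bounds` is a hypothetical name, and nested lists stand in for mask tensors.

```python
# Sketch of computing a mask's bounds with the two fixes noted in the commit:
# an all-zero mask falls back to the full image instead of erroring, and the
# returned indices are ints, so they can index into tensors directly.
def mask_bounds(mask):
    coords = [(y, x) for y, row in enumerate(mask)
                     for x, v in enumerate(row) if v]
    if not coords:  # all-zero mask: use the whole image
        return 0, 0, len(mask), len(mask[0])
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return min(ys), min(xs), max(ys) + 1, max(xs) + 1  # half-open bounds
```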
comfyanonymous
056e5545ff Don't try to get vram from xpu or cuda when directml is enabled. 2023-04-29 00:28:48 -04:00
comfyanonymous
2ca934f7d4 You can now select the device index with: --directml id
Like this for example: --directml 1
2023-04-28 16:51:35 -04:00
comfyanonymous
3baded9892 Basic torch_directml support. Use --directml to use it. 2023-04-28 14:28:57 -04:00
Jacob Segal
e214c917ae Add Condition by Mask node
This PR adds support for a Condition by Mask node. This node allows
conditioning to be limited to a non-rectangle area.
2023-04-27 20:03:27 -07:00
comfyanonymous
5a971cecdb Add callback to sampler function.
Callback format is: callback(step, x0, x)
2023-04-27 04:38:44 -04:00
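A minimal callback matching the `callback(step, x0, x)` signature noted in the commit might look like this; the body is illustrative (a real one would typically drive a progress bar or a latent preview).

```python
# Sketch of a callback factory producing the documented signature
# callback(step, x0, x); this one just records which steps ran.
def make_callback(log):
    def callback(step, x0, x):
        # step: current sampling step; x0: current denoised estimate;
        # x: current noisy latent (both unused in this sketch)
        log.append(step)
    return callback
```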
comfyanonymous
aa57136dae Some fixes to the batch masks PR. 2023-04-25 01:12:40 -04:00
comfyanonymous
c50208a703 Refactor more code to sample.py 2023-04-24 23:25:51 -04:00
comfyanonymous
7983b3a975 This is cleaner this way. 2023-04-24 22:45:35 -04:00
BlenderNeko
0b07b2cc0f gligen tuple 2023-04-24 21:47:57 +02:00
pythongosssss
c8c9926eeb Add progress to vae decode tiled 2023-04-24 11:55:44 +01:00
BlenderNeko
d9b1595f85 made sample functions more explicit 2023-04-24 12:53:10 +02:00
BlenderNeko
5818539743 add docstrings 2023-04-23 20:09:09 +02:00
BlenderNeko
8d2de420d3 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-04-23 20:02:18 +02:00
BlenderNeko
2a09e2aa27 refactor/split various bits of code for sampling 2023-04-23 20:02:08 +02:00
comfyanonymous
5282f56434 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
comfyanonymous
6908f9c949 This makes pytorch2.0 attention perform a bit faster. 2023-04-22 14:30:39 -04:00
comfyanonymous
907010e082 Remove some useless code. 2023-04-20 23:58:25 -04:00