Commit Graph

334 Commits

comfyanonymous
91ed2815d5 Add a node to merge CLIP models. 2023-07-14 02:41:18 -04:00
comfyanonymous
b2f03164c7 Prevent the clip_g position_ids key from being saved in the checkpoint.
This is to make it match the official checkpoint.
2023-07-12 20:15:02 -04:00
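
Stripping the buffer comes down to filtering one key out of the state dict at save time. A minimal sketch of the idea, assuming the key can be matched by its suffix (the commit does not show the exact clip_g key path):

```python
# Minimal sketch (not ComfyUI's actual code): drop buffer keys such as
# "...position_ids" from a state dict before serializing, so the saved
# file matches the official checkpoint layout.
def strip_position_ids(state_dict):
    return {k: v for k, v in state_dict.items()
            if not k.endswith("position_ids")}
```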
comfyanonymous
46dc050c9f Fix potential tensors being on different devices issues. 2023-07-12 19:29:27 -04:00
comfyanonymous
606a537090 Support the SDXL embedding format with 2 CLIP models. 2023-07-10 10:34:59 -04:00
comfyanonymous
6ad0a6d7e2 Don't patch weights when multiplier is zero. 2023-07-09 17:46:56 -04:00
comfyanonymous
d5323d16e0 latent2rgb matrix for SDXL. 2023-07-09 13:59:09 -04:00
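
A latent2rgb matrix maps the 4 latent channels to 3 RGB channels so previews can be drawn without running the VAE; SDXL's latent space needs its own coefficients. A minimal sketch of the mechanism, with a placeholder matrix rather than the tuned SDXL values:

```python
import torch

def latent2rgb(latent, factors):
    # latent: [B, 4, H, W]; factors: [4, 3] channel-to-RGB projection.
    rgb = torch.einsum("bchw,cr->brhw", latent, factors)
    return (rgb.clamp(-1, 1) + 1.0) / 2.0  # map to [0, 1] for display

factors = torch.randn(4, 3)  # placeholder; the real matrix is hand-tuned
preview = latent2rgb(torch.randn(1, 4, 64, 64), factors)
```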
comfyanonymous
0ae81c03bb Empty cache after model unloading for normal vram and lower. 2023-07-09 09:56:03 -04:00
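
With PyTorch's caching CUDA allocator, deleting a model only returns its memory to the framework's cache; handing the VRAM back to the driver takes an explicit call. A minimal sketch of the pattern:

```python
import gc
import torch

def unload_and_free(model):
    # The caller must drop its own references too, or nothing is freed.
    del model
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the driver
```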
comfyanonymous
d3f5998218 Support loading clip_g from diffusers in CLIP Loader nodes. 2023-07-09 09:33:53 -04:00
comfyanonymous
a9a4ba7574 Fix merging not working when model2 of the model merge node was itself a merge. 2023-07-08 22:31:10 -04:00
comfyanonymous
bb5fbd29e9 Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI 2023-07-07 01:52:25 -04:00
comfyanonymous
e7bee85df8 Add arguments to run the VAE in fp16 or bf16 for testing. 2023-07-06 23:23:46 -04:00
comfyanonymous
608fcc2591 Fix bug with weights when prompt is long. 2023-07-06 02:43:40 -04:00
comfyanonymous
ddc6f12ad5 Disable autocast in unet for increased speed. 2023-07-05 21:58:29 -04:00
comfyanonymous
603f02d613 Fix loras not working when loading checkpoint with config. 2023-07-05 19:42:24 -04:00
comfyanonymous
af7a49916b Support loading unet files in diffusers format. 2023-07-05 17:38:59 -04:00
comfyanonymous
e57cba4c61 Add GPU variations of the SDE samplers that are less deterministic
but faster.
2023-07-05 01:39:38 -04:00
comfyanonymous
f81b192944 Add logit scale parameter so it's present when saving the checkpoint. 2023-07-04 23:01:28 -04:00
comfyanonymous
acf95191ff Properly support SDXL diffusers loras for unet. 2023-07-04 21:15:23 -04:00
mara
c61a95f9f7 Fix size check for conditioning mask
The wrong dimensions were being checked: [1] and [2] are the image size,
not [2] and [3]. This results in an out-of-bounds error if one of them
actually matches.
2023-07-04 16:34:42 +02:00
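
A hypothetical reconstruction of the corrected check: for a mask shaped [batch, height, width], the spatial size lives in dimensions 1 and 2, and indexing dimension 3 raises an IndexError.

```python
def mask_needs_resize(mask, height, width):
    # Buggy version: mask.shape[2] != height or mask.shape[3] != width
    # (shape[3] does not exist on a 3-D mask, so the second comparison
    # raised as soon as shape[2] happened to match).
    return mask.shape[1] != height or mask.shape[2] != width
```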
comfyanonymous
8d694cc450 Fix issue with OSX. 2023-07-04 02:09:02 -04:00
comfyanonymous
c3e96e637d Pass device to CLIP model. 2023-07-03 16:09:37 -04:00
comfyanonymous
5e6bc824aa Allow passing custom path to clip-g and clip-h. 2023-07-03 15:45:04 -04:00
comfyanonymous
dc9d1f31c8 Improvements for OSX. 2023-07-03 00:08:30 -04:00
comfyanonymous
103c487a89 Cleanup. 2023-07-02 11:58:23 -04:00
comfyanonymous
2c4e0b49b7 Switch to fp16 on some cards when the model is too big. 2023-07-02 10:00:57 -04:00
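
A minimal sketch of that kind of heuristic; the threshold and headroom factor here are assumptions, not ComfyUI's exact logic:

```python
import torch

def pick_dtype(model_size_bytes, device=None):
    # Fall back to fp16 when the fp32 weights would not fit in free VRAM,
    # leaving some headroom for activations.
    free_vram, _total = torch.cuda.mem_get_info(device)
    if model_size_bytes * 1.2 > free_vram:
        return torch.float16
    return torch.float32
```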
comfyanonymous
6f3d9f52db Add a --force-fp16 argument to force fp16 for testing. 2023-07-01 22:42:35 -04:00
comfyanonymous
1c1b0e7299 --gpu-only now keeps the VAE on the device. 2023-07-01 15:22:40 -04:00
comfyanonymous
ce35d8c659 Lower latency by batching some text encoder inputs. 2023-07-01 15:07:39 -04:00
comfyanonymous
3b6fe51c1d Leave text_encoder on the CPU when it can handle it. 2023-07-01 14:38:51 -04:00
comfyanonymous
b6a60fa696 Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
comfyanonymous
97ee230682 Make highvram and normalvram shift the text encoders to VRAM and back.
This is faster for big text encoder models than running them on the CPU.
2023-07-01 12:37:23 -04:00
comfyanonymous
5a9ddf94eb LoraLoader node now caches the lora file between executions. 2023-06-29 23:40:51 -04:00
comfyanonymous
9920367d3c Fix embeddings not working with --gpu-only. 2023-06-29 20:43:06 -04:00
comfyanonymous
62db11683b Move unet to device right after loading on highvram mode. 2023-06-29 20:43:06 -04:00
comfyanonymous
4376b125eb Remove useless code. 2023-06-29 00:26:33 -04:00
comfyanonymous
89120f1fbe This is unused but it should be 1280. 2023-06-28 18:04:23 -04:00
comfyanonymous
2c7c14de56 Support for SDXL text encoder lora. 2023-06-28 02:22:49 -04:00
comfyanonymous
fcef47f06e Fix bug. 2023-06-28 00:38:07 -04:00
comfyanonymous
8248babd44 Use PyTorch attention by default on NVIDIA when xformers isn't present.
Add a new argument: --use-quad-cross-attention.
2023-06-26 13:03:44 -04:00
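
"PyTorch attention" here presumably refers to the fused scaled-dot-product attention built into PyTorch 2.x, which acts as a drop-in replacement for xformers. A minimal sketch of the call:

```python
import torch
import torch.nn.functional as F

# q, k, v: [batch, heads, tokens, head_dim]
q = torch.randn(1, 8, 77, 64)
k, v = torch.randn_like(q), torch.randn_like(q)
out = F.scaled_dot_product_attention(q, k, v)  # uses a fused kernel when available
```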
comfyanonymous
9b93b920be Add CheckpointSave node to save checkpoints.
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.

Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32, use --force-fp32.

Anything that patches the model weights like merging or loras will be
saved.

The output directory is currently set to output/checkpoints, but that might
change in the future.
2023-06-26 12:22:27 -04:00
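
A minimal sketch of how workflow metadata can ride along in a saved checkpoint, assuming the safetensors format (metadata values must be strings; the key name here is an assumption, not ComfyUI's exact schema):

```python
import json
from safetensors.torch import save_file

def save_checkpoint(state_dict, workflow, path):
    # The metadata lives in the safetensors header, so a UI can recover the
    # workflow by parsing the header without loading any weights.
    save_file(state_dict, path, metadata={"workflow": json.dumps(workflow)})
```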
comfyanonymous
b72a7a835a Support loras based on the stability unet implementation. 2023-06-26 02:56:11 -04:00
comfyanonymous
c71a7e6b20 Fix ddim + inpainting not working. 2023-06-26 00:48:48 -04:00
comfyanonymous
4eab00e14b Set the seed in the SDE samplers to make them more reproducible. 2023-06-25 03:04:57 -04:00
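
Unlike ODE samplers, SDE samplers inject fresh noise at every step, so reproducibility means drawing that noise from an explicitly seeded generator. A minimal sketch, not ComfyUI's actual sampler code:

```python
import torch

def make_noise_sampler(seed, device="cpu"):
    gen = torch.Generator(device=device).manual_seed(seed)
    def sample(shape, dtype=torch.float32):
        return torch.randn(shape, generator=gen, device=device, dtype=dtype)
    return sample

noise = make_noise_sampler(42)((1, 4, 64, 64))
```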
comfyanonymous
cef6aa62b2 Add support for TAESD decoder for SDXL. 2023-06-25 02:38:14 -04:00
comfyanonymous
20f579d91d Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00
comfyanonymous
b7933960bb Fix CLIPLoader node. 2023-06-24 13:56:46 -04:00
comfyanonymous
78d8035f73 Fix bug with controlnet. 2023-06-24 11:02:38 -04:00
comfyanonymous
05676942b7 Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
comfyanonymous
fa28d7334b Remove useless code. 2023-06-23 12:35:26 -04:00
comfyanonymous
8607c2d42d Move latent scale factor from VAE to model. 2023-06-23 02:33:31 -04:00