Commit Graph

483 Commits

Author SHA1 Message Date
comfyanonymous
c20583286f Support diffuser text encoder loras. 2023-08-10 20:28:28 -04:00
comfyanonymous
cf10c5592c Disable calculating uncond when CFG is 1.0 2023-08-09 20:55:03 -04:00
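Since classifier-free guidance computes `uncond + (cond - uncond) * cfg`, a scale of 1.0 reduces to `cond` alone, so the unconditional forward pass can be skipped entirely. A minimal sketch of that shortcut (the `model` call signature is illustrative):

```python
def cfg_denoise(model, x, t, cond, uncond, cond_scale):
    # uncond + (cond - uncond) * 1.0 == cond, so skip the second forward pass
    if cond_scale == 1.0:
        return model(x, t, cond)
    c = model(x, t, cond)
    u = model(x, t, uncond)
    return u + (c - u) * cond_scale
```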
comfyanonymous
1f0f4cc0bd Add argument to disable auto launching the browser. 2023-08-07 02:25:12 -04:00
comfyanonymous
d8e58f0a7e Detect hint_channels from controlnet. 2023-08-06 14:08:59 -04:00
comfyanonymous
c5d7593ccf Support loras in diffusers format. 2023-08-05 01:40:24 -04:00
comfyanonymous
1ce0d8ad68 Add CMP 30HX card to the nvidia_16_series list. 2023-08-04 12:08:45 -04:00
comfyanonymous
c99d8002f8 Make sure the pooled output stays at the EOS token with added embeddings. 2023-08-03 20:27:50 -04:00
comfyanonymous
4a77fcd6ab Only shift text encoder to vram when CPU cores are under 8. 2023-07-31 00:08:54 -04:00
comfyanonymous
3cd31d0e24 Lower CPU thread check for running the text encoder on the CPU vs GPU. 2023-07-30 17:18:24 -04:00
comfyanonymous
2b13939044 Remove some useless code. 2023-07-30 14:13:33 -04:00
comfyanonymous
95d796fc85 Faster VAE loading. 2023-07-29 16:28:30 -04:00
comfyanonymous
4b957a0010 Initialize the unet directly on the target device. 2023-07-29 14:51:56 -04:00
comfyanonymous
c910b4a01c Remove unused code and torchdiffeq dependency. 2023-07-28 21:32:27 -04:00
comfyanonymous
1141029a4a Add --disable-metadata argument to disable saving metadata in files. 2023-07-28 12:31:41 -04:00
comfyanonymous
fbf5c51c1c Merge branch 'fix_batch_timesteps' of https://github.com/asagi4/ComfyUI 2023-07-27 16:13:48 -04:00
comfyanonymous
68be24eead Remove some prints. 2023-07-27 16:12:43 -04:00
asagi4
1ea4d84691 Fix timestep ranges when batch_size > 1 2023-07-27 21:14:09 +03:00
comfyanonymous
5379051d16 Fix diffusers VAE loading. 2023-07-26 18:26:39 -04:00
comfyanonymous
727588d076 Fix some new loras. 2023-07-25 16:39:15 -04:00
comfyanonymous
4f9b6f39d1 Fix potential issue with Save Checkpoint. 2023-07-25 00:45:20 -04:00
comfyanonymous
5f75d784a1 Start is now 0.0 and end is now 1.0 for the timestep ranges. 2023-07-24 18:38:17 -04:00
comfyanonymous
7ff14b62f8 ControlNetApplyAdvanced can now define when controlnet gets applied. 2023-07-24 17:50:49 -04:00
comfyanonymous
d191c4f9ed Add a ControlNetApplyAdvanced node.
The controlnet can be applied to the positive or negative prompt only by
connecting it correctly.
2023-07-24 13:35:20 -04:00
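A sketch of why the wiring decides where the controlnet applies: the control object is attached to each conditioning entry it is connected to, so only that input receives the hint (helper names here are illustrative, not the node's exact code):

```python
def apply_controlnet(conditioning, control_net, image, strength):
    out = []
    for cond_tensor, opts in conditioning:   # ComfyUI conds are [tensor, dict] pairs
        opts = opts.copy()                   # don't mutate the original cond
        c_net = control_net.copy()
        c_net.set_cond_hint(image.movedim(-1, 1), strength)
        opts["control"] = c_net              # the sampler reads this per conditioning
        out.append([cond_tensor, opts])
    return out
```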
comfyanonymous
0240946ecf Add a way to set which range of timesteps the cond gets applied to. 2023-07-24 09:25:02 -04:00
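With the 0.0 to 1.0 convention from the commits above, gating a cond by schedule position is a fraction comparison; a minimal sketch:

```python
def cond_active(step, total_steps, start=0.0, end=1.0):
    # start/end are fractions of the schedule: 0.0 = first step, 1.0 = last
    percent = step / max(total_steps - 1, 1)
    return start <= percent <= end
```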
comfyanonymous
22f29d66ca Try to fix memory issue with lora. 2023-07-22 21:38:56 -04:00
comfyanonymous
67be7eb81d Nodes can now patch the unet function. 2023-07-22 17:01:12 -04:00
comfyanonymous
12a6e93171 Del the right object when applying lora. 2023-07-22 11:25:49 -04:00
comfyanonymous
78e7958d17 Support controlnet in diffusers format. 2023-07-21 22:58:16 -04:00
comfyanonymous
09386a3697 Fix issue with lora in some cases when combined with model merging. 2023-07-21 21:27:27 -04:00
comfyanonymous
58b2364f58 Properly support SDXL diffusers unet with UNETLoader node. 2023-07-21 14:38:56 -04:00
comfyanonymous
0115018695 Print errors and continue when lora weights are not compatible. 2023-07-20 19:56:22 -04:00
comfyanonymous
4760c29380 Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI 2023-07-20 00:34:54 -04:00
comfyanonymous
0b284f650b Fix typo. 2023-07-19 10:20:32 -04:00
comfyanonymous
e032ca6138 Fix ddim issue with older torch versions. 2023-07-19 10:16:00 -04:00
comfyanonymous
18885f803a Add MX450 and MX550 to list of cards with broken fp16. 2023-07-19 03:08:30 -04:00
comfyanonymous
9ba440995a It's actually possible to torch.compile the unet now. 2023-07-18 21:36:35 -04:00
comfyanonymous
51d5477579 Add key to indicate checkpoint is v_prediction when saving. 2023-07-18 00:25:53 -04:00
comfyanonymous
ff6b047a74 Fix device print on old torch version. 2023-07-17 15:18:58 -04:00
comfyanonymous
9871a15cf9 Enable --cuda-malloc by default on torch 2.0 and up.
Add --disable-cuda-malloc to disable it.
2023-07-17 15:12:10 -04:00
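The allocator backend has to be chosen before torch touches the GPU, which is why this is a launch flag. A sketch of the wiring (flag names from the commit; the env-var mechanism is standard PyTorch):

```python
import os
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--cuda-malloc", action="store_true")
parser.add_argument("--disable-cuda-malloc", action="store_true")
args = parser.parse_args()

if args.cuda_malloc and not args.disable_cuda_malloc:
    # must happen before the first CUDA allocation
    env = os.environ.get("PYTORCH_CUDA_ALLOC_CONF", "")
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (env + "," if env else "") + "backend:cudaMallocAsync"

import torch  # noqa: E402  (imported only after the allocator is configured)
```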
comfyanonymous
55d0fca9fa --windows-standalone-build now enables --cuda-malloc 2023-07-17 14:10:36 -04:00
comfyanonymous
1679abd86d Add a command line argument to enable backend:cudaMallocAsync 2023-07-17 11:00:14 -04:00
comfyanonymous
3a150bad15 Only calculate randn in some samplers when it's actually being used. 2023-07-17 10:11:08 -04:00
comfyanonymous
ee8f8ee07f Fix regression with ddim and uni_pc when batch size > 1. 2023-07-17 09:35:19 -04:00
comfyanonymous
3ded1a3a04 Refactor of sampler code to deal more easily with different model types. 2023-07-17 01:22:12 -04:00
comfyanonymous
5f57362613 Lower lora ram usage when in normal vram mode. 2023-07-16 02:59:04 -04:00
comfyanonymous
490771b7f4 Speed up lora loading a bit. 2023-07-15 13:25:22 -04:00
comfyanonymous
50b1180dde Fix CLIPSetLastLayer not reverting when removed. 2023-07-15 01:41:21 -04:00
comfyanonymous
6fb084f39d Reduce floating point rounding errors in loras. 2023-07-15 00:53:00 -04:00
comfyanonymous
91ed2815d5 Add a node to merge CLIP models. 2023-07-14 02:41:18 -04:00
comfyanonymous
b2f03164c7 Prevent the clip_g position_ids key from being saved in the checkpoint.
This is to make it match the official checkpoint.
2023-07-12 20:15:02 -04:00
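A small sketch of the idea: drop the buffer from the state dict before writing, so the saved file matches the official checkpoint (the exact key name below is an assumption):

```python
def strip_position_ids(state_dict):
    # hypothetical key; the real clip_g key in ComfyUI may differ slightly
    state_dict = dict(state_dict)
    state_dict.pop("clip_g.transformer.text_model.embeddings.position_ids", None)
    return state_dict
```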
comfyanonymous
46dc050c9f Fix potential tensors being on different devices issues. 2023-07-12 19:29:27 -04:00
KarryCharon
3e2309f149 Fix missing mps import. 2023-07-12 10:06:34 +08:00
comfyanonymous
606a537090 Support SDXL embedding format with 2 CLIP. 2023-07-10 10:34:59 -04:00
comfyanonymous
6ad0a6d7e2 Don't patch weights when multiplier is zero. 2023-07-09 17:46:56 -04:00
comfyanonymous
d5323d16e0 latent2rgb matrix for SDXL. 2023-07-09 13:59:09 -04:00
comfyanonymous
0ae81c03bb Empty cache after model unloading for normal vram and lower. 2023-07-09 09:56:03 -04:00
comfyanonymous
d3f5998218 Support loading clip_g from diffusers in CLIP Loader nodes. 2023-07-09 09:33:53 -04:00
comfyanonymous
a9a4ba7574 Fix merging not working when model2 of model merge node was a merge. 2023-07-08 22:31:10 -04:00
comfyanonymous
bb5fbd29e9 Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI 2023-07-07 01:52:25 -04:00
comfyanonymous
e7bee85df8 Add arguments to run the VAE in fp16 or bf16 for testing. 2023-07-06 23:23:46 -04:00
comfyanonymous
608fcc2591 Fix bug with weights when prompt is long. 2023-07-06 02:43:40 -04:00
comfyanonymous
ddc6f12ad5 Disable autocast in unet for increased speed. 2023-07-05 21:58:29 -04:00
comfyanonymous
603f02d613 Fix loras not working when loading checkpoint with config. 2023-07-05 19:42:24 -04:00
comfyanonymous
af7a49916b Support loading unet files in diffusers format. 2023-07-05 17:38:59 -04:00
comfyanonymous
e57cba4c61 Add gpu variations of the sde samplers that are less deterministic
but faster.
2023-07-05 01:39:38 -04:00
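The trade-off in one sketch: seeding a CPU generator reproduces the same noise on any machine, while drawing directly on the GPU is faster but device-dependent (a simplified, assumed noise-sampler shape):

```python
import torch

def noise_sampler_cpu(shape, seed, device):
    gen = torch.Generator().manual_seed(seed)               # reproducible everywhere
    return lambda: torch.randn(shape, generator=gen).to(device)

def noise_sampler_gpu(shape, seed, device):
    gen = torch.Generator(device=device).manual_seed(seed)  # faster, device-dependent
    return lambda: torch.randn(shape, generator=gen, device=device)
```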
comfyanonymous
f81b192944 Add logit scale parameter so it's present when saving the checkpoint. 2023-07-04 23:01:28 -04:00
comfyanonymous
acf95191ff Properly support SDXL diffusers loras for unet. 2023-07-04 21:15:23 -04:00
mara
c61a95f9f7 Fix size check for conditioning mask
The wrong dimensions were being checked: [1] and [2] are the image size,
not [2] and [3]. This results in an out-of-bounds error if one of them
actually matches.
2023-07-04 16:34:42 +02:00
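In code, the fix amounts to checking the right axes of a `(batch, height, width)` mask; a sketch:

```python
import torch

def resize_mask(mask, h, w):
    # mask: (batch, height, width) — spatial size lives at indices 1 and 2;
    # the old check read shape[2]/shape[3] and could index out of bounds
    if mask.shape[1] != h or mask.shape[2] != w:
        mask = torch.nn.functional.interpolate(
            mask.unsqueeze(1), size=(h, w), mode="bilinear"
        ).squeeze(1)
    return mask
```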
comfyanonymous
8d694cc450 Fix issue with OSX. 2023-07-04 02:09:02 -04:00
comfyanonymous
c3e96e637d Pass device to CLIP model. 2023-07-03 16:09:37 -04:00
comfyanonymous
5e6bc824aa Allow passing custom path to clip-g and clip-h. 2023-07-03 15:45:04 -04:00
comfyanonymous
dc9d1f31c8 Improvements for OSX. 2023-07-03 00:08:30 -04:00
comfyanonymous
103c487a89 Cleanup. 2023-07-02 11:58:23 -04:00
comfyanonymous
2c4e0b49b7 Switch to fp16 on some cards when the model is too big. 2023-07-02 10:00:57 -04:00
comfyanonymous
6f3d9f52db Add a --force-fp16 argument to force fp16 for testing. 2023-07-01 22:42:35 -04:00
comfyanonymous
1c1b0e7299 --gpu-only now keeps the VAE on the device. 2023-07-01 15:22:40 -04:00
comfyanonymous
ce35d8c659 Lower latency by batching some text encoder inputs. 2023-07-01 15:07:39 -04:00
comfyanonymous
3b6fe51c1d Leave text_encoder on the CPU when it can handle it. 2023-07-01 14:38:51 -04:00
comfyanonymous
b6a60fa696 Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
comfyanonymous
97ee230682 Make highvram and normalvram shift the text encoders to vram and back.
This is faster on big text encoder models than running it on the CPU.
2023-07-01 12:37:23 -04:00
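Both commits boil down to treating the text encoder like any other managed model: move it to vram for the encode, then shift it back so the unet gets the memory. A hypothetical sketch:

```python
def encode_with_offload(text_encoder, tokens, device, offload_device="cpu"):
    text_encoder.to(device)               # shift weights to vram for the encode
    try:
        return text_encoder.encode(tokens)   # hypothetical encode API
    finally:
        text_encoder.to(offload_device)   # give the vram back to the unet
```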
comfyanonymous
5a9ddf94eb LoraLoader node now caches the lora file between executions. 2023-06-29 23:40:51 -04:00
comfyanonymous
9920367d3c Fix embeddings not working with --gpu-only 2023-06-29 20:43:06 -04:00
comfyanonymous
62db11683b Move unet to device right after loading on highvram mode. 2023-06-29 20:43:06 -04:00
comfyanonymous
4376b125eb Remove useless code. 2023-06-29 00:26:33 -04:00
comfyanonymous
89120f1fbe This is unused but it should be 1280. 2023-06-28 18:04:23 -04:00
comfyanonymous
2c7c14de56 Support for SDXL text encoder lora. 2023-06-28 02:22:49 -04:00
comfyanonymous
fcef47f06e Fix bug. 2023-06-28 00:38:07 -04:00
comfyanonymous
8248babd44 Use pytorch attention by default on nvidia when xformers isn't present.
Add a new argument --use-quad-cross-attention
2023-06-26 13:03:44 -04:00
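A sketch of the selection order the commit describes, with the new flag forcing the quadratic implementation (the function name is illustrative):

```python
def pick_cross_attention(use_quad=False):
    if use_quad:
        return "quad"                    # memory-hungry but always available
    try:
        import xformers  # noqa: F401
        return "xformers"
    except ImportError:
        pass
    import torch
    if hasattr(torch.nn.functional, "scaled_dot_product_attention"):
        return "pytorch"                 # torch >= 2.0 built-in attention
    return "quad"
```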
comfyanonymous
9b93b920be Add CheckpointSave node to save checkpoints.
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.

Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32

Anything that patches the model weights like merging or loras will be
saved.

The output directory is currently set to: output/checkpoints but that might
change in the future.
2023-06-26 12:22:27 -04:00
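A minimal sketch of that save path, assuming safetensors output: the workflow JSON rides along in the file's metadata header, which is what makes drag-and-drop loading possible.

```python
import json
from safetensors.torch import save_file

def save_checkpoint(state_dict, workflow, path, force_fp32=False):
    if not force_fp32:
        state_dict = {k: (v.half() if v.is_floating_point() else v)
                      for k, v in state_dict.items()}
    metadata = {"workflow": json.dumps(workflow)}  # recoverable by the UI
    save_file(state_dict, path, metadata=metadata)
```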
comfyanonymous
b72a7a835a Support loras based on the stability unet implementation. 2023-06-26 02:56:11 -04:00
comfyanonymous
c71a7e6b20 Fix ddim + inpainting not working. 2023-06-26 00:48:48 -04:00
comfyanonymous
4eab00e14b Set the seed in the SDE samplers to make them more reproducible. 2023-06-25 03:04:57 -04:00
comfyanonymous
cef6aa62b2 Add support for TAESD decoder for SDXL. 2023-06-25 02:38:14 -04:00
comfyanonymous
20f579d91d Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00
comfyanonymous
b7933960bb Fix CLIPLoader node. 2023-06-24 13:56:46 -04:00
comfyanonymous
78d8035f73 Fix bug with controlnet. 2023-06-24 11:02:38 -04:00
comfyanonymous
05676942b7 Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
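The q-vs-x change in a sketch: token-merging similarity is scored on the attention query projection rather than the raw hidden states (simplified; real ToMe also partitions tokens into source and destination sets):

```python
import torch

def tome_similarity(q):                 # q: (batch, tokens, dim)
    q = q / q.norm(dim=-1, keepdim=True)
    return q @ q.transpose(-1, -2)      # cosine similarity between tokens
```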
comfyanonymous
fa28d7334b Remove useless code. 2023-06-23 12:35:26 -04:00
comfyanonymous
8607c2d42d Move latent scale factor from VAE to model. 2023-06-23 02:33:31 -04:00
comfyanonymous
30a3861946 Fix bug when yaml config has no clip params. 2023-06-23 01:12:59 -04:00
comfyanonymous
9e37f4c7d5 Fix error with ClipVision loader node. 2023-06-23 01:08:05 -04:00
comfyanonymous
9f83b098c9 Don't merge weights when shapes don't match and print a warning. 2023-06-22 19:08:31 -04:00
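A sketch of that guard, assuming a simple linear merge: mismatched tensors are reported and skipped instead of raising.

```python
def merge_weights(model_sd, other_sd, ratio):
    merged = dict(model_sd)
    for k, w in other_sd.items():
        if k not in model_sd or model_sd[k].shape != w.shape:
            print(f"warning: shape mismatch for {k}, weight not merged")
            continue
        merged[k] = model_sd[k] * (1.0 - ratio) + w * ratio
    return merged
```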
comfyanonymous
f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
9fccf4aa03 Add original_shape parameter to transformer patch extra_options. 2023-06-21 13:22:01 -04:00
comfyanonymous
51581dbfa9 Fix last commits causing an issue with the text encoder lora. 2023-06-20 19:44:39 -04:00
comfyanonymous
8125b51a62 Keep a set of model_keys for faster add_patches. 2023-06-20 19:08:48 -04:00
comfyanonymous
45beebd33c Add a type of model patch useful for model merging. 2023-06-20 17:34:11 -04:00
comfyanonymous
036a22077c Fix k_diffusion math being off by a tiny bit during txt2img. 2023-06-19 15:28:54 -04:00
comfyanonymous
8883cb0f67 Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future proof.
2023-06-18 22:58:22 -04:00
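A hypothetical sketch of the dict-based patch format: each patch receives the output plus an extra_options dict, so new fields can be added later without breaking existing patch functions.

```python
def my_attn2_output_patch(out, extra_options):
    # extra_options can grow new keys (e.g. original_shape) without
    # changing this signature
    return out * 1.05   # toy example: slightly amplify cross-attention output

# assuming a ModelPatcher-style instance `model`; the registration hook
# name is an assumption based on the commit description
model.set_model_attn2_output_patch(my_attn2_output_patch)
```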
comfyanonymous
cd930d4e7f pop clip vision keys after loading them. 2023-06-18 21:21:17 -04:00
comfyanonymous
c9e4a8c9e5 Not needed anymore. 2023-06-18 13:06:59 -04:00
comfyanonymous
fb4bf7f591 This is not needed anymore and causes issues with alphas_cumprod. 2023-06-18 03:18:25 -04:00
comfyanonymous
45be2e92c1 Fix DDIM v-prediction. 2023-06-17 20:48:21 -04:00
comfyanonymous
e6e50ab2dd Fix an issue when alphas_cumprod are half floats. 2023-06-16 17:16:51 -04:00
comfyanonymous
ae43f09ef7 All the unet weights should now be initialized with the right dtype. 2023-06-15 18:42:30 -04:00
comfyanonymous
f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
comfyanonymous
7bf89ba923 Initialize more unet weights as the right dtype. 2023-06-15 15:00:10 -04:00
comfyanonymous
e21d9ad445 Initialize transformer unet block weights in right dtype at the start. 2023-06-15 14:29:26 -04:00
comfyanonymous
bb1f45d6e8 Properly disable weight initialization in clip models. 2023-06-14 20:13:08 -04:00
comfyanonymous
21f04fe632 Disable default weight values in unet conv2d for faster loading. 2023-06-14 19:46:08 -04:00
comfyanonymous
9d54066ebc This isn't needed for inference. 2023-06-14 13:05:08 -04:00
comfyanonymous
fa2cca056c Don't initialize CLIPVision weights to default values. 2023-06-14 12:57:02 -04:00
comfyanonymous
6b774589a5 Set model to fp16 before loading the state dict to lower ram bump. 2023-06-14 12:48:02 -04:00
comfyanonymous
0c7cad404c Don't initialize clip weights to default values. 2023-06-14 12:47:36 -04:00
comfyanonymous
6971646b8b Speed up model loading a bit.
Default pytorch Linear initializes the weights which is useless and slow.
2023-06-14 12:09:41 -04:00
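The trick in a sketch: nn.Linear's default reset_parameters fills weights that a subsequent load_state_dict immediately overwrites, so a subclass can skip it.

```python
import torch

class Linear(torch.nn.Linear):
    def reset_parameters(self):
        return None   # skip the default init; load_state_dict fills the weights
```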
comfyanonymous
388567f20b sampler_cfg_function now uses a dict for the argument.
This means arguments can be added without issues.
2023-06-13 16:10:36 -04:00
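A sketch of the new calling convention: arguments arrive packed in one dict, so keys can be added later without breaking existing callbacks (the key names below are assumptions from this era of the API):

```python
def my_cfg_function(args):
    cond, uncond = args["cond"], args["uncond"]
    cond_scale = args["cond_scale"]
    return uncond + (cond - uncond) * cond_scale   # the standard CFG combine

# assuming a ModelPatcher-style instance `model`
model.set_model_sampler_cfg_function(my_cfg_function)
```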
comfyanonymous
ff9b22d79e Turn on safe load for a few models. 2023-06-13 10:12:03 -04:00
comfyanonymous
735ac4cf81 Remove pytorch_lightning dependency. 2023-06-13 10:11:33 -04:00
comfyanonymous
2b14041d4b Remove useless code. 2023-06-13 02:40:58 -04:00
comfyanonymous
274dff3257 Remove more useless files. 2023-06-13 02:22:19 -04:00
comfyanonymous
f0a2b81cd0 Cleanup: Remove a bunch of useless files. 2023-06-13 02:19:08 -04:00
comfyanonymous
f8c5931053 Split the batch in VAEEncode if there's not enough memory. 2023-06-12 00:21:50 -04:00
comfyanonymous
c069fc0730 Auto switch to tiled VAE encode if regular one runs out of memory. 2023-06-11 23:25:39 -04:00
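A sketch of the fallback, assuming a tiled counterpart to the regular encode exists: catch the CUDA OOM and retry in tiles.

```python
import torch

def vae_encode_safe(vae, pixels):
    try:
        return vae.encode(pixels)
    except torch.cuda.OutOfMemoryError:    # raised by recent torch on CUDA OOM
        return vae.encode_tiled(pixels)    # assumed tiled fallback method
```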
comfyanonymous
c64ca8c0b2 Refactor unCLIP noise augment out of samplers.py 2023-06-11 04:01:18 -04:00
comfyanonymous
de142eaad5 Simpler base model code. 2023-06-09 12:31:16 -04:00
comfyanonymous
23cf8ca7c5 Fix bug when embedding gets ignored because of mismatched size. 2023-06-08 23:48:14 -04:00
comfyanonymous
0e425603fb Small refactor. 2023-06-06 13:23:01 -04:00
comfyanonymous
a3a713b6c5 Refactor previews into one command line argument.
Clean up a few things.
2023-06-06 02:13:05 -04:00
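A sketch of the consolidated flag, with an auto-detect default and choices matching the preview backends added in the commits below:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--preview-method", default="auto",
                    choices=["auto", "none", "latent2rgb", "taesd"],
                    help="how sampler previews are generated")
```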
space-nuko
3e17971acb preview method autodetection 2023-06-05 18:59:10 -05:00
space-nuko
d5a28fadaa Add latent2rgb preview 2023-06-05 18:39:56 -05:00
space-nuko
48f7ec750c Make previews into cli option 2023-06-05 13:19:02 -05:00
space-nuko
b4f434ee66 Preview sampled images with TAESD 2023-06-05 09:20:17 -05:00
comfyanonymous
fed0a4dd29 Some comments to say what the vram state options mean. 2023-06-04 17:51:04 -04:00
comfyanonymous
0a5fefd621 Cleanups and fixes for model_management.py
Hopefully fix regression on MPS and CPU.
2023-06-03 11:05:37 -04:00
comfyanonymous
700491d81a Implement global average pooling for controlnet. 2023-06-03 01:49:03 -04:00
comfyanonymous
67892b5ac5 Refactor and improve model_management code related to free memory. 2023-06-02 15:21:33 -04:00
space-nuko
499641ebf1 More accurate total 2023-06-02 00:14:41 -05:00
space-nuko
b5dd15c67a System stats endpoint 2023-06-01 23:26:23 -05:00
comfyanonymous
5c38958e49 Tweak lowvram model memory so it's closer to what it was before. 2023-06-01 04:04:35 -04:00
comfyanonymous
94680732d3 Empty cache on mps. 2023-06-01 03:52:51 -04:00