Commit Graph

878 Commits

Author SHA1 Message Date
comfyanonymous
9f71e4b62d Let model patches patch sub objects. 2024-03-02 11:43:27 -05:00
comfyanonymous
00425563c0 Cleanup: Use sampling noise scaling function for inpainting. 2024-03-01 14:24:41 -05:00
comfyanonymous
c62e836167 Move noise scaling to object with sampling math. 2024-03-01 12:54:38 -05:00
comfyanonymous
cb7c3a2921 Allow image_only_indicator to be None. 2024-02-29 13:11:30 -05:00
comfyanonymous
b3e97fc714 Koala 700M and 1B support.
Load their unet files with the UNET Loader node to use them.
2024-02-28 12:10:11 -05:00
comfyanonymous
37a86e4618 Remove duplicate text_projection key from some saved models. 2024-02-28 03:57:41 -05:00
comfyanonymous
8daedc5bf2 Auto detect playground v2.5 model. 2024-02-27 18:03:03 -05:00
comfyanonymous
d46583ecec Playground V2.5 support with ModelSamplingContinuousEDM node.
Use ModelSamplingContinuousEDM with edm_playground_v2.5 selected.
2024-02-27 15:12:33 -05:00
comfyanonymous
1e0fcc9a65 Make XL checkpoints save in a more standard format. 2024-02-27 02:07:40 -05:00
comfyanonymous
b416be7d78 Make the text projection saved in the checkpoint the right format. 2024-02-27 01:52:23 -05:00
comfyanonymous
03c47fc0f2 Add a min_length property to tokenizer class. 2024-02-26 21:36:37 -05:00
comfyanonymous
8ac69f62e5 Make return_projected_pooled settable from the __init__ 2024-02-25 14:49:13 -05:00
comfyanonymous
ca7c310a0e Support loading old CLIP models saved with CLIPSave. 2024-02-25 08:29:12 -05:00
comfyanonymous
c2cb8e889b Always return unprojected pooled output for gligen. 2024-02-25 07:33:13 -05:00
comfyanonymous
1cb3f6a83b Move text projection into the CLIP model code.
Fix issue with not loading the SSD1B clip correctly.
2024-02-25 01:41:08 -05:00
comfyanonymous
6533b172c1 Support text encoder text_projection in lora. 2024-02-24 23:50:46 -05:00
comfyanonymous
1e5f0f66be Support lora keys with lora_prior_unet_ and lora_prior_te_ 2024-02-23 12:21:20 -05:00
logtd
e1cb93c383 Fix model and cond transformer options merge 2024-02-23 01:19:43 -07:00
comfyanonymous
10847dfafe Cleanup uni_pc inpainting.
This causes some small changes to the uni_pc inpainting behavior, but it
seems to improve results slightly.
2024-02-23 02:39:35 -05:00
comfyanonymous
18c151b3e3 Add some latent2rgb matrices for previews. 2024-02-20 10:57:24 -05:00
comfyanonymous
0d0fbabd1d Pass pooled CLIP to stage b. 2024-02-20 04:24:45 -05:00
comfyanonymous
c6b7a157ed Align simple scheduling closer to official stable cascade scheduler. 2024-02-20 04:24:39 -05:00
comfyanonymous
88f300401c Enable fp16 by default on mps. 2024-02-19 12:00:48 -05:00
comfyanonymous
e93cdd0ad0 Remove print. 2024-02-19 11:47:26 -05:00
comfyanonymous
3711b31dff Support Stable Cascade in checkpoint format. 2024-02-19 11:20:48 -05:00
comfyanonymous
d91f45ef28 Some cleanups to how the text encoders are loaded. 2024-02-19 10:46:30 -05:00
comfyanonymous
a7b5eaa7e3 Forgot to commit this. 2024-02-19 04:25:46 -05:00
comfyanonymous
3b2e579926 Support loading the Stable Cascade effnet and previewer as a VAE.
The effnet can be used to encode images for img2img with Stage C.
2024-02-19 04:10:01 -05:00
comfyanonymous
dccca1daa5 Fix gligen lowvram mode. 2024-02-18 02:20:23 -05:00
comfyanonymous
8b60d33bb7 Add ModelSamplingStableCascade to control the shift sampling parameter.
shift is 2.0 by default on Stage C and 1.0 by default on Stage B.
2024-02-18 00:55:23 -05:00
comfyanonymous
6bcf57ff10 Fix attention masks properly for multiple batches. 2024-02-17 16:15:18 -05:00
comfyanonymous
11e3221f1f fp8 weight support for Stable Cascade. 2024-02-17 15:27:31 -05:00
comfyanonymous
f8706546f3 Fix attention mask batch size in some attention functions. 2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5 Properly fix attention masks in CLIP with batches. 2024-02-17 12:13:13 -05:00
comfyanonymous
5b40e7a5ed Implement shift schedule for cascade stage C. 2024-02-17 11:38:47 -05:00
comfyanonymous
929e266f3e Manual cast for bf16 on older GPUs. 2024-02-17 09:01:17 -05:00
comfyanonymous
6c875d846b Fix clip attention mask issues on some hardware. 2024-02-17 07:53:52 -05:00
comfyanonymous
805c36ac9c Make Stable Cascade work on old pytorch 2.0 2024-02-17 00:42:30 -05:00
comfyanonymous
f2d1d16f4f Support Stable Cascade Stage B lite. 2024-02-16 23:41:23 -05:00
comfyanonymous
0b3c50480c Make --force-fp32 disable loading models in bf16. 2024-02-16 23:01:54 -05:00
comfyanonymous
97d03ae04a StableCascade CLIP model support. 2024-02-16 13:29:04 -05:00
comfyanonymous
667c92814e Stable Cascade Stage B. 2024-02-16 13:02:03 -05:00
comfyanonymous
f83109f09b Stable Cascade Stage C. 2024-02-16 10:55:08 -05:00
comfyanonymous
5e06baf112 Stable Cascade Stage A. 2024-02-16 06:30:39 -05:00
comfyanonymous
aeaeca10bd Small refactor of is_device_* functions. 2024-02-15 21:10:10 -05:00
comfyanonymous
38b7ac6e26 Don't init the CLIP model when the checkpoint has no CLIP weights. 2024-02-13 00:01:08 -05:00
comfyanonymous
7dd352cbd7 Merge branch 'feature_expose_discard_penultimate_sigma' of https://github.com/blepping/ComfyUI 2024-02-11 12:23:30 -05:00
Jedrzej Kosinski
f44225fd5f Fix infinite while loop being possible in ddim_scheduler 2024-02-09 17:11:34 -06:00
comfyanonymous
25a4805e51 Add a way to set different conditioning for the controlnet. 2024-02-09 14:13:31 -05:00
blepping
a352c021ec Allow custom samplers to request discard penultimate sigma 2024-02-08 02:24:23 -07:00
comfyanonymous
c661a8b118 Don't use numpy for calculating sigmas. 2024-02-07 18:52:51 -05:00
comfyanonymous
236bda2683 Make minimum tile size the size of the overlap. 2024-02-05 01:29:26 -05:00
comfyanonymous
66e28ef45c Don't use is_bf16_supported to check for fp16 support. 2024-02-04 20:53:35 -05:00
comfyanonymous
24129d78e6 Speed up SDXL on 16xx series with fp16 weights and manual cast. 2024-02-04 13:23:43 -05:00
comfyanonymous
4b0239066d Always use fp16 for the text encoders. 2024-02-02 10:02:49 -05:00
comfyanonymous
da7a8df0d2 Put VAE key name in model config. 2024-01-30 02:24:38 -05:00
comfyanonymous
89507f8adf Remove some unused imports. 2024-01-25 23:42:37 -05:00
Dr.Lt.Data
05cd00695a
typo fix - calculate_sigmas_scheduler (#2619)
self.scheduler -> scheduler_name

Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
2024-01-23 03:47:01 -05:00
comfyanonymous
4871a36458 Cleanup some unused imports. 2024-01-21 21:51:22 -05:00
comfyanonymous
78a70fda87 Remove useless import. 2024-01-19 15:38:05 -05:00
comfyanonymous
d76a04b6ea Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
This node is unfinished; SVD checkpoints saved with it will work with
ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
comfyanonymous
f9e55d8463 Only auto enable bf16 VAE on nvidia GPUs that actually support it. 2024-01-15 03:10:22 -05:00
comfyanonymous
2395ae740a Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
comfyanonymous
53c8a99e6c Make server storage the default.
Remove --server-storage argument.
2024-01-11 17:21:40 -05:00
comfyanonymous
977eda19a6 Don't round noise mask. 2024-01-11 03:29:58 -05:00
comfyanonymous
10f2609fdd Add InpaintModelConditioning node.
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.

This is a different take on #2501
2024-01-11 03:15:27 -05:00
comfyanonymous
1a57423d30 Fix issue when using multiple t2i adapters with batched images. 2024-01-10 04:00:49 -05:00
comfyanonymous
6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 2024-01-09 13:46:52 -05:00
pythongosssss
235727fed7
Store user settings/data on the server and multi user support (#2160)
* wip per user data

* Rename, hide menu

* better error
rework default user

* store pretty

* Add userdata endpoints
Change nodetemplates to userdata

* add multi user message

* make normal arg

* Fix tests

* Ignore user dir

* user tests

* Changed to default to browser storage and add server-storage arg

* fix crash on empty templates

* fix settings added before load

* ignore parse errors
2024-01-08 17:06:44 -05:00
comfyanonymous
c6951548cf Update the optimized_attention_for_device function for the new
attention functions that support masked attention.
2024-01-07 13:52:08 -05:00
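
For reference, masked attention in stock PyTorch (2.0+) looks like this; a minimal sketch, not ComfyUI's internal function:

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 8, 77, 64)   # (batch, heads, tokens, dim_head)
mask = torch.zeros(77, 77)              # additive mask: 0.0 keeps a position
mask[:, 40:] = float("-inf")            # -inf hides e.g. padding tokens
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```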
comfyanonymous
aaa9017302 Add attention mask support to sub quad attention. 2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa Support attention mask in split attention. 2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb Implement attention mask on xformers. 2024-01-06 04:33:03 -05:00
comfyanonymous
8c6493578b Implement noise augmentation for SD 4X upscale model. 2024-01-03 14:27:11 -05:00
comfyanonymous
ef4f6037cb Fix model patches not working in custom sampling scheduler nodes. 2024-01-03 12:16:30 -05:00
comfyanonymous
a7874d1a8b Add support for the stable diffusion x4 upscaling model.
This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous
2c4e92a98b Fix regression. 2024-01-02 14:41:33 -05:00
comfyanonymous
5eddfdd80c Refactor VAE code.
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
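
For illustration, the two values replace hardcoded shape math along these lines (SD's usual values shown; a sketch, not the refactored code itself):

```python
def latent_shape(batch, height, width, latent_channels=4, downscale_ratio=8):
    # With SD's usual values a 512x512 image maps to a (4, 64, 64) latent.
    return (batch, latent_channels, height // downscale_ratio, width // downscale_ratio)

print(latent_shape(1, 512, 512))  # (1, 4, 64, 64)
```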
comfyanonymous
a47f609f90 Auto detect out_channels from model. 2024-01-02 01:50:57 -05:00
comfyanonymous
79f73a4b33 Remove useless code. 2024-01-02 01:50:29 -05:00
comfyanonymous
1b103e0cb2 Add argument to run the VAE on the CPU. 2023-12-30 05:49:07 -05:00
comfyanonymous
12e822c6c8 Use function to calculate model size in model patcher. 2023-12-28 21:46:20 -05:00
comfyanonymous
e1e322cf69 Load weights that can't be lowvramed to target device. 2023-12-28 21:41:10 -05:00
comfyanonymous
c782144433 Fix clip vision lowvram mode not working. 2023-12-27 13:50:57 -05:00
comfyanonymous
f21bb41787 Fix taesd VAE in lowvram mode. 2023-12-26 12:52:21 -05:00
comfyanonymous
61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 2023-12-26 05:02:02 -05:00
comfyanonymous
d0165d819a Fix SVD lowvram mode. 2023-12-24 07:13:18 -05:00
comfyanonymous
a252963f95 --disable-smart-memory now unloads everything like it did originally. 2023-12-23 04:25:06 -05:00
comfyanonymous
36a7953142 Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
2023-12-22 14:38:45 -05:00
comfyanonymous
261bcbb0d9 A few missing comfy ops in the VAE. 2023-12-22 04:05:42 -05:00
comfyanonymous
9a7619b72d Fix regression with inpaint model. 2023-12-19 02:32:59 -05:00
comfyanonymous
571ea8cdcc Fix SAG not working with cfg 1.0 2023-12-18 17:03:32 -05:00
comfyanonymous
8cf1daa108 Fix SDXL area composition sometimes not using the right pooled output. 2023-12-18 12:54:23 -05:00
comfyanonymous
2258f85159 Support stable zero 123 model.
To use it, load the checkpoint with the ImageOnlyCheckpointLoader and use
the new Stable_Zero123 node.
2023-12-18 03:48:04 -05:00
comfyanonymous
2f9d6a97ec Add --deterministic option to make pytorch use deterministic algorithms. 2023-12-17 16:59:21 -05:00
comfyanonymous
e45d920ae3 Don't resize clip vision image when the size is already good. 2023-12-16 03:06:10 -05:00
comfyanonymous
13e6d5366e Switch clip vision to manual cast.
Make it use the same dtype as the text encoder.
2023-12-16 02:47:26 -05:00
comfyanonymous
719fa0866f Set clip vision model in eval mode so it works without inference mode. 2023-12-15 18:53:08 -05:00
Hari
574363a8a6 Implement Perp-Neg 2023-12-16 00:28:16 +05:30
comfyanonymous
a5056cfb1f Remove useless code. 2023-12-15 01:28:16 -05:00
comfyanonymous
329c571993 Improve code legibility. 2023-12-14 11:41:49 -05:00
comfyanonymous
6c5990f7db Fix cfg being calculated more than once if sampler_cfg_function. 2023-12-13 20:28:04 -05:00
comfyanonymous
ba04a87d10 Refactor and improve the sag node.
Moved all the SAG-related code to comfy_extras/nodes_sag.py
2023-12-13 16:11:26 -05:00
Rafie Walker
6761233e9d
Implement Self-Attention Guidance (#2201)
* First SAG test

* need to put extra options on the model instead of patcher

* no errors and results seem not-broken

* Use @ashen-uncensored formula, which works better!!!

* Fix a crash when using weird resolutions. Remove an unnecessary UNet call

* Improve comments, optimize memory in blur routine

* SAG works with sampler_cfg_function
2023-12-13 15:52:11 -05:00
comfyanonymous
b454a67bb9 Support segmind vega model. 2023-12-12 19:09:53 -05:00
comfyanonymous
824e4935f5 Add dtype parameter to VAE object. 2023-12-12 12:03:29 -05:00
comfyanonymous
32b7e7e769 Add manual cast to controlnet. 2023-12-12 11:32:42 -05:00
comfyanonymous
3152023fbc Use inference dtype for unet memory usage estimation. 2023-12-11 23:50:38 -05:00
comfyanonymous
77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it more clear what they actually do.

Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
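
The new name describes the trick: layer classes whose reset_parameters is a no-op, so building a model skips random weight init that load_state_dict would overwrite anyway. A minimal sketch of the idea, not the actual comfy.ops code:

```python
import torch.nn as nn

class disable_weight_init:
    class Linear(nn.Linear):
        def reset_parameters(self):
            # No random init; the real weights arrive via load_state_dict().
            return None
```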
comfyanonymous
b0aab1e4ea Add an option --fp16-unet to force using fp16 for the unet. 2023-12-11 18:36:29 -05:00
comfyanonymous
ba07cb748e Use faster manual cast for fp8 in unet. 2023-12-11 18:24:44 -05:00
comfyanonymous
57926635e8 Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
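
Manual cast means weights keep a compact storage dtype and are cast to the compute dtype per call; a rough sketch of the pattern (illustrative, not the comfy.ops implementation):

```python
import torch.nn as nn
import torch.nn.functional as F

class ManualCastLinear(nn.Linear):
    def forward(self, x):
        # Weights stay in their storage dtype (e.g. fp16 to halve memory);
        # they are cast to the input's dtype/device only for this op.
        weight = self.weight.to(dtype=x.dtype, device=x.device)
        bias = None if self.bias is None else self.bias.to(dtype=x.dtype, device=x.device)
        return F.linear(x, weight, bias)
```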
comfyanonymous
340177e6e8 Disable non blocking on mps. 2023-12-10 01:30:35 -05:00
comfyanonymous
614b7e731f Implement GLora. 2023-12-09 18:15:26 -05:00
comfyanonymous
cb63e230b4 Make lora code a bit cleaner. 2023-12-09 14:15:09 -05:00
comfyanonymous
174eba8e95 Use own clip vision model implementation. 2023-12-09 11:56:31 -05:00
comfyanonymous
97015b6b38 Cleanup. 2023-12-08 16:02:08 -05:00
comfyanonymous
a4ec54a40d Add linear_start and linear_end to model_config.sampling_settings 2023-12-08 02:49:30 -05:00
comfyanonymous
9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of cpu. 2023-12-08 02:35:45 -05:00
comfyanonymous
efb704c758 Support attention masking in CLIP implementation. 2023-12-07 02:51:02 -05:00
comfyanonymous
fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
comfyanonymous
2db86b4676 Slightly faster lora applying. 2023-12-06 05:13:14 -05:00
comfyanonymous
1bbd65ab30 Missed this one. 2023-12-05 12:48:41 -05:00
comfyanonymous
9b655d4fd7 Fix memory issue with control loras. 2023-12-04 21:55:19 -05:00
comfyanonymous
26b1c0a771 Fix control lora on fp8. 2023-12-04 13:47:41 -05:00
comfyanonymous
be3468ddd5 Less useless downcasting. 2023-12-04 12:53:46 -05:00
comfyanonymous
ca82ade765 Use .itemsize to get dtype size for fp8. 2023-12-04 11:52:06 -05:00
comfyanonymous
31b0f6f3d8 UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet select between the two fp8 formats
supported by pytorch.
2023-12-04 11:10:00 -05:00
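
PyTorch (2.1+) exposes both fp8 formats as storage dtypes; nothing computes in fp8 here, so weights are upcast before each op. A sketch under those assumptions:

```python
import torch

w = torch.randn(4096, 4096)
w_fp8 = w.to(torch.float8_e4m3fn)        # or torch.float8_e5m2
print(torch.float8_e4m3fn.itemsize)      # 1 byte per weight vs 2 for fp16

x = torch.randn(1, 4096)
y = x @ w_fp8.to(torch.float32).T        # upcast before the actual matmul
```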
comfyanonymous
af365e4dd1 All the unet ops with weights are now handled by comfy.ops 2023-12-04 03:12:18 -05:00
comfyanonymous
61a123a1e0 A different way of handling multiple images passed to SVD.
Previously when a list of 3 images [0, 1, 2] was used for a 6 frame video
they were concatenated like this:
[0, 1, 2, 0, 1, 2]

now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
2023-12-03 03:31:47 -05:00
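
The difference in tensor terms, as a small sketch:

```python
import torch

images = torch.randn(3, 64, 64, 3)           # 3 input images
old = images.repeat(2, 1, 1, 1)              # frame order: 0, 1, 2, 0, 1, 2
new = images.repeat_interleave(2, dim=0)     # frame order: 0, 0, 1, 1, 2, 2
```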
comfyanonymous
c97be4db91 Support SD2.1 turbo checkpoint. 2023-11-30 19:27:03 -05:00
comfyanonymous
983ebc5792 Use smart model management for VAE to decrease latency. 2023-11-28 04:58:51 -05:00
comfyanonymous
c45d1b9b67 Add a function to load a unet from a state dict. 2023-11-27 17:41:29 -05:00
comfyanonymous
f30b992b18 .sigma and .timestep now return tensors on the same device as the input. 2023-11-27 16:41:33 -05:00
comfyanonymous
13fdee6abf Try to free memory for both cond+uncond before inference. 2023-11-27 14:55:40 -05:00
comfyanonymous
be71bb5e13 Tweak memory inference calculations a bit. 2023-11-27 14:04:16 -05:00
comfyanonymous
39e75862b2 Fix regression from last commit. 2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
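
As an illustration of what a patch now sees, a hedged sketch: the attn1 patch hook and its (q, k, v, extra_options) signature are assumptions here, and `model` stands for a loaded ModelPatcher:

```python
def my_attn1_patch(q, k, v, extra_options):
    # extra_options now mirrors everything stored in transformer_options,
    # so the custom flag set below is visible here.
    if extra_options.get("my_flag"):
        q = q * 1.1
    return q, k, v

m = model.clone()                            # model: a loaded ModelPatcher
m.set_model_attn1_patch(my_attn1_patch)
m.model_options["transformer_options"]["my_flag"] = True
```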
comfyanonymous
5d6dfce548 Fix importing diffusers unets. 2023-11-24 20:35:29 -05:00
comfyanonymous
3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 2023-11-24 03:55:35 -05:00
comfyanonymous
871cc20e13 Support SVD img2vid model. 2023-11-23 19:41:33 -05:00
comfyanonymous
410bf07771 Make VAE memory estimation take dtype into account. 2023-11-22 18:17:19 -05:00
comfyanonymous
32447f0c39 Add sampling_settings so models can specify specific sampling settings. 2023-11-22 17:24:00 -05:00
comfyanonymous
c3ae99a749 Allow controlling downscale and upscale methods in PatchModelAddDownscale. 2023-11-22 03:23:16 -05:00
comfyanonymous
72741105a6 Remove useless code. 2023-11-21 17:27:28 -05:00
comfyanonymous
6a491ebe27 Allow model config to preprocess the vae state dict on load. 2023-11-21 16:29:18 -05:00
comfyanonymous
cd4fc77d5f Add taesd and taesdxl to VAELoader node.
They will show up if both the taesd_encoder and taesd_decoder (or the
taesdxl equivalents) model files are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
comfyanonymous
ce67dcbcda Make it easy for models to process the unet state dict on load. 2023-11-20 23:17:53 -05:00
comfyanonymous
d9d8702d8d percent_to_sigma now returns a float instead of a tensor. 2023-11-18 23:20:29 -05:00
comfyanonymous
0cf4e86939 Add some command line arguments to store text encoder weights in fp8.
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
comfyanonymous
107e78b1cb Add support for loading SSD1B diffusers unet version.
Improve diffusers model detection.
2023-11-16 23:12:55 -05:00
comfyanonymous
7e3fe3ad28 Make deep shrink behave like it should. 2023-11-16 15:26:28 -05:00
comfyanonymous
9f00a18095 Fix potential issues. 2023-11-16 14:59:54 -05:00
comfyanonymous
7ea6bb038c Print warning when controlnet can't be applied instead of crashing. 2023-11-16 12:57:12 -05:00
comfyanonymous
dcec1047e6 Invert the start and end percentages in the code.
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.

percent_to_sigma goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.

Make percent 0 return an extremely large sigma and percent 1.0 return a
sigma of zero to fix imprecision.
2023-11-16 04:23:44 -05:00
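
In code terms, the new contract looks roughly like this (a sketch; the exact boundary constants are illustrative):

```python
def percent_to_sigma(percent):
    if percent <= 0.0:   # very start of sampling: effectively infinite noise
        return 999999999.9
    if percent >= 1.0:   # very end of sampling: no noise left
        return 0.0
    ...                  # in between: interpolate the model's sigma schedule
```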
comfyanonymous
57eea0efbb heunpp2 sampler. 2023-11-14 23:50:55 -05:00
comfyanonymous
728613bb3e Fix last pr. 2023-11-14 14:41:31 -05:00
comfyanonymous
ec3d0ab432 Merge branch 'master' of https://github.com/Jannchie/ComfyUI 2023-11-14 14:38:07 -05:00
comfyanonymous
c962884a5c Make bislerp work on GPU. 2023-11-14 11:38:36 -05:00
comfyanonymous
420beeeb05 Clean up and refactor sampler code.
This should make it much easier to write custom nodes with kdiffusion type
samplers.
2023-11-14 00:39:34 -05:00
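
Custom samplers in this style follow the k-diffusion calling convention; a minimal Euler-style sketch under that assumption (sample_my_euler is a made-up name):

```python
from tqdm.auto import trange

def sample_my_euler(model, x, sigmas, extra_args=None, callback=None, disable=None):
    # k-diffusion convention: model(x, sigma, **extra_args) returns the
    # denoised image and sigmas is a descending 1-D tensor of noise levels.
    extra_args = {} if extra_args is None else extra_args
    for i in trange(len(sigmas) - 1, disable=disable):
        sigma = sigmas[i] * x.new_ones([x.shape[0]])
        denoised = model(x, sigma, **extra_args)
        d = (x - denoised) / sigmas[i]              # derivative estimate
        x = x + d * (sigmas[i + 1] - sigmas[i])     # Euler step to the next sigma
        if callback is not None:
            callback({"x": x, "i": i, "sigma": sigmas[i], "denoised": denoised})
    return x
```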
Jianqi Pan
f2e49b1d57 fix: adaptation to older versions of pytorch 2023-11-14 14:32:05 +09:00
comfyanonymous
94cc718e9c Add a way to add patches to the input block. 2023-11-14 00:08:12 -05:00
comfyanonymous
7339479b10 Disable xformers when it can't load properly. 2023-11-13 12:31:10 -05:00
comfyanonymous
4781819a85 Make memory estimation aware of model dtype. 2023-11-12 04:28:26 -05:00
comfyanonymous
dd4ba68b6e Allow different models to estimate memory usage differently. 2023-11-12 04:03:52 -05:00
comfyanonymous
2c9dba8dc0 sampling_function now has the model object as the argument. 2023-11-12 03:45:10 -05:00
comfyanonymous
8d80584f6a Remove useless argument from uni_pc sampler. 2023-11-12 01:25:33 -05:00
comfyanonymous
248aa3e563 Fix bug. 2023-11-11 12:20:16 -05:00
comfyanonymous
4a8a839b40 Add option to use in place weight updating in ModelPatcher. 2023-11-11 01:11:12 -05:00
comfyanonymous
412d3ff57d Refactor. 2023-11-11 01:11:06 -05:00
comfyanonymous
58d5d71a93 Working RescaleCFG node.
This was broken because of recent changes so I fixed it and moved it from
the experiments repo.
2023-11-10 20:52:10 -05:00
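
The node follows the rescaled-CFG idea from the paper "Common Diffusion Noise Schedules and Sample Steps are Flawed"; a rough sketch, with the 0.7 blend weight assumed as a typical default:

```python
def rescale_cfg(cond, uncond, cond_scale, multiplier=0.7):
    x_cfg = uncond + cond_scale * (cond - uncond)   # plain CFG
    dims = list(range(1, cond.ndim))
    std_cond = cond.std(dim=dims, keepdim=True)
    std_cfg = x_cfg.std(dim=dims, keepdim=True)
    rescaled = x_cfg * (std_cond / std_cfg)         # restore cond's per-sample std
    return multiplier * rescaled + (1.0 - multiplier) * x_cfg
```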
comfyanonymous
3e0033ef30 Fix model merge bug.
Unload models before getting weights for model patching.
2023-11-10 03:19:05 -05:00
comfyanonymous
002aefa382 Support lcm models.
Use the "lcm" sampler to sample them, you also have to use the
ModelSamplingDiscrete node to set them as lcm models to use them properly.
2023-11-09 18:30:22 -05:00
comfyanonymous
ec12000136 Add support for full diff lora keys. 2023-11-08 22:05:31 -05:00
comfyanonymous
064d7583eb Add a CONDConstant for passing non tensor conds to unet. 2023-11-08 01:59:09 -05:00
comfyanonymous
794dd2064d Fix typo. 2023-11-07 23:41:55 -05:00
comfyanonymous
0a6fd49a3e Print leftover keys when using the UNETLoader. 2023-11-07 22:15:55 -05:00
comfyanonymous
fe40109b57 Fix issue with object patches not being copied with patcher. 2023-11-07 22:15:15 -05:00
comfyanonymous
a527d0c795 Code refactor. 2023-11-07 19:33:40 -05:00
comfyanonymous
2a23ba0b8c Fix unet ops not entirely on GPU. 2023-11-07 04:30:37 -05:00
comfyanonymous
844dbf97a7 Add: advanced->model->ModelSamplingDiscrete node.
This allows changing the sampling parameters of the model (eps or vpred)
or setting the model to use zsnr.
2023-11-07 03:28:53 -05:00
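
For reference, the two parameterizations recover the denoised image differently; a sigma-space sketch, assuming sigma_data = 1.0 as in SD models:

```python
def eps_denoised(model_output, model_input, sigma):
    # eps: the model predicts the noise that was added.
    return model_input - model_output * sigma

def v_denoised(model_output, model_input, sigma, sigma_data=1.0):
    # vpred: the model predicts a mix of signal and noise.
    return (model_input * sigma_data ** 2 / (sigma ** 2 + sigma_data ** 2)
            - model_output * sigma * sigma_data / (sigma ** 2 + sigma_data ** 2) ** 0.5)
```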
comfyanonymous
656c0b5d90 CLIP code refactor and improvements.
More generic clip model class that can be used on more types of text
encoders.

Don't apply weighting algorithm when weight is 1.0

Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
comfyanonymous
b3fcd64c6c Make SDTokenizer class work with more types of tokenizers. 2023-11-06 01:09:18 -05:00
gameltb
7e455adc07 fix unet_wrapper_function name in ModelPatcher 2023-11-05 17:11:44 +08:00
comfyanonymous
1ffa8858e7 Move model sampling code to comfy/model_sampling.py 2023-11-04 01:32:23 -04:00
comfyanonymous
ae2acfc21b Don't convert NaN to zero.
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
comfyanonymous
d2e27b48f1 sampler_cfg_function now gets the noisy output as argument again.
This should make things that use sampler_cfg_function behave like before.

Added an input argument for those that want the denoised output.

This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
2023-11-01 21:24:08 -04:00
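
A custom cfg function using those arguments might look like the sketch below; the args keys ("cond", "uncond", "cond_scale", "input") and the set_model_sampler_cfg_function hook follow the description above but should be treated as assumptions, and `model` stands for a loaded ModelPatcher:

```python
def my_cfg_function(args):
    cond, uncond = args["cond"], args["uncond"]
    cond_scale = args["cond_scale"]
    x0 = args["input"] - cond      # the model's x0 prediction (shown for illustration)
    # Plain CFG, returned in the same noisy-output space the inputs use:
    return uncond + cond_scale * (cond - uncond)

m = model.clone()
m.set_model_sampler_cfg_function(my_cfg_function)
```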
comfyanonymous
2455aaed8a Allow model or clip to be None in load_lora_for_models. 2023-11-01 20:27:20 -04:00
comfyanonymous
ecb80abb58 Allow ModelSamplingDiscrete to be instantiated without a model config. 2023-11-01 19:13:03 -04:00
comfyanonymous
e73ec8c4da Not used anymore. 2023-11-01 00:01:30 -04:00
comfyanonymous
111f1b5255 Fix some issues with sampling precision. 2023-10-31 23:49:29 -04:00
comfyanonymous
7c0f255de1 Clean up percent start/end and make controlnets work with sigmas. 2023-10-31 22:14:32 -04:00
comfyanonymous
a268a574fa Remove a bunch of useless code.
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.

I'm keeping it in because I'm sure if I remove it people are going to
complain.
2023-10-31 18:11:29 -04:00
comfyanonymous
1777b54d02 Sampling code changes.
apply_model in model_base now returns the denoised output.

This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
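
Concretely, this means guidance math like CFG now mixes denoised (x0-like) predictions instead of raw model outputs; a minimal sketch:

```python
def cfg_on_denoised(cond_denoised, uncond_denoised, cond_scale):
    # Blending on denoised predictions keeps the same math valid for eps,
    # vpred and future parameterizations alike.
    return uncond_denoised + cond_scale * (cond_denoised - uncond_denoised)
```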
comfyanonymous
c837a173fa Fix some memory issues in sub quad attention. 2023-10-30 15:30:49 -04:00
comfyanonymous
125b03eead Fix some OOM issues with split attention. 2023-10-30 13:14:11 -04:00
comfyanonymous
a12cc05323 Add --max-upload-size argument, the default is 100MB. 2023-10-29 03:55:46 -04:00
comfyanonymous
2a134bfab9 Fix checkpoint loader with config. 2023-10-27 22:13:55 -04:00
comfyanonymous
e60ca6929a SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one. 2023-10-27 15:54:04 -04:00
comfyanonymous
6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 2023-10-27 14:45:15 -04:00