Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2025-01-11 10:25:16 +00:00)

Commit 47acb3d73e
It needs the CLIPVision model, so I added CLIPVisionLoader and CLIPVisionEncode. Put the CLIP vision model in models/clip_vision and the t2i style model in models/style_models. Use StyleModelLoader to load it, StyleModelApply to apply it, and ConditioningAppend to append the conditioning it outputs to a positive one.
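The node chain above can be sketched in plain Python. This is a minimal toy model of the data flow only, with hypothetical stand-in types and functions named after the nodes; it is not ComfyUI's actual API, and the real nodes operate on loaded models and tensors rather than lists of floats.

```python
from dataclasses import dataclass, field

@dataclass
class ClipVisionOutput:
    # Stand-in for the image embedding tensor produced by CLIPVisionEncode.
    embedding: list

@dataclass
class Conditioning:
    # Stand-in for a conditioning list as passed between nodes.
    entries: list = field(default_factory=list)

def clip_vision_encode(image):
    # CLIPVisionEncode: encode an image with the CLIP vision model
    # (toy version: just converts pixels to floats).
    return ClipVisionOutput(embedding=[float(p) for p in image])

def style_model_apply(strength, clip_vision_output):
    # StyleModelApply: run the vision embedding through the style model
    # to get extra conditioning (toy version: scales the embedding).
    return Conditioning(entries=[e * strength for e in clip_vision_output.embedding])

def conditioning_append(positive, style_cond):
    # ConditioningAppend: append the style conditioning to the positive
    # prompt conditioning.
    return Conditioning(entries=positive.entries + style_cond.entries)

# Usage mirroring the node graph described in the commit message:
image = [1, 2, 3]                        # stand-in for a reference image
positive = Conditioning(entries=[0.5])   # stand-in for the text conditioning
vision = clip_vision_encode(image)
style = style_model_apply(2.0, vision)
combined = conditioning_append(positive, style)
```

The point of the sketch is the ordering: the image is encoded first, the style model consumes that encoding, and only then is the result appended to the positive conditioning.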
Files changed:

- cldm/
- extra_samplers/
- k_diffusion/
- ldm/
- sd1_tokenizer/
- t2i_adapter/
- model_management.py
- samplers.py
- sd1_clip_config.json
- sd1_clip.py
- sd2_clip_config.json
- sd2_clip.py
- sd.py
- utils.py