ComfyUI/comfy/ldm/flux
Raphael Walker 61b50720d0
Add support for attention masking in Flux (#5942)
* fix attention OOM in xformers

* allow passing attention mask in flux attention

* allow an attn_mask in flux

* attn masks can be done using replace patches instead of a separate dict

* fix return types

* fix return order

* enumerate

* patch the right keys

* arg names

* fix a silly bug

* fix xformers masks

* replace match with if, elif, else

* mask with image_ref_size

* remove unused import

* remove unused import 2

* fix pytorch/xformers attention

This corrects a weird inconsistency with skip_reshape.
It also allows masks of various shapes to be passed, which will be
automatically expanded (in a memory-efficient way) to a size that is
compatible with xformers or PyTorch SDPA respectively.

* fix mask shapes
2024-12-16 18:21:17 -05:00
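The memory-efficient mask expansion described in the commit message can be sketched roughly as follows. This is a hypothetical helper, not the actual ComfyUI implementation: it broadcasts a mask of various shapes to the 4D `(batch, heads, Lq, Lk)` layout that PyTorch's `scaled_dot_product_attention` accepts, using `Tensor.expand()`, which returns a zero-stride view instead of copying memory.

```python
import torch
import torch.nn.functional as F

def expand_attn_mask(mask: torch.Tensor, batch: int, heads: int) -> torch.Tensor:
    """Broadcast a 2D/3D/4D attention mask to (batch, heads, Lq, Lk).

    Tensor.expand() creates a view with stride 0 along the broadcast
    dimensions, so no extra memory is allocated for the expansion.
    """
    if mask.ndim == 2:        # (Lq, Lk) -> (1, 1, Lq, Lk)
        mask = mask[None, None]
    elif mask.ndim == 3:      # (B, Lq, Lk) -> (B, 1, Lq, Lk)
        mask = mask[:, None]
    # A 4D mask passes through; singleton dims are broadcast below.
    return mask.expand(batch, heads, mask.shape[-2], mask.shape[-1])

# Example: one boolean mask shared across batch and heads, fed to SDPA.
q = torch.randn(2, 4, 5, 8)   # (batch, heads, Lq, head_dim)
k = torch.randn(2, 4, 7, 8)   # (batch, heads, Lk, head_dim)
v = torch.randn(2, 4, 7, 8)
mask = torch.ones(5, 7, dtype=torch.bool)
out = F.scaled_dot_product_attention(
    q, k, v, attn_mask=expand_attn_mask(mask, batch=2, heads=4)
)
```

xformers' `memory_efficient_attention` has different layout expectations, which is why the commit expands masks per backend rather than forcing callers to pre-shape them.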
controlnet.py Lint unused import (#5973) 2024-12-09 15:24:39 -05:00
layers.py Add support for attention masking in Flux (#5942) 2024-12-16 18:21:17 -05:00
math.py Add support for attention masking in Flux (#5942) 2024-12-16 18:21:17 -05:00
model.py Add support for attention masking in Flux (#5942) 2024-12-16 18:21:17 -05:00
redux.py Support new flux model variants. 2024-11-21 08:38:23 -05:00