Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2025-07-03 13:57:09 +08:00)

* Add --use-flash-attention flag. This is useful on AMD systems, as Flash Attention (FA) builds are still 10% faster than PyTorch cross-attention there.
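
As a rough illustration of how a flag like this is typically wired up, here is a minimal Python sketch: an argparse flag that switches the attention path between the flash-attn package and PyTorch's built-in scaled dot-product attention. The `attention_forward` helper and the exact dispatch logic are hypothetical assumptions for illustration, not ComfyUI's actual implementation.

```python
import argparse

import torch
import torch.nn.functional as F

# Hypothetical sketch: parse a --use-flash-attention style flag and pick an
# attention backend accordingly. Not ComfyUI's real argument parser.
parser = argparse.ArgumentParser()
parser.add_argument("--use-flash-attention", action="store_true",
                    help="Prefer flash-attn kernels over PyTorch SDPA.")
args = parser.parse_args()


def attention_forward(q, k, v):
    """Dispatch to flash-attn when requested, else PyTorch SDPA.

    q, k, v are assumed to have shape (batch, seqlen, heads, head_dim).
    """
    if args.use_flash_attention:
        # flash_attn_func takes tensors shaped (batch, seqlen, heads, head_dim).
        from flash_attn import flash_attn_func
        return flash_attn_func(q, k, v)
    # Default path: scaled_dot_product_attention expects
    # (batch, heads, seqlen, head_dim), so transpose in and back out.
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    return F.scaled_dot_product_attention(q, k, v).transpose(1, 2)
```

The design point is simply that the flag selects a backend once at startup, so the per-layer attention call stays unchanged regardless of which kernel ends up running.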