Better memory estimation for ROCm cards that support mem efficient attention.

There is no way to check whether the card actually supports it, so support is
assumed if you use --use-pytorch-cross-attention with your card.
comfyanonymous 2025-02-13 08:32:36 -05:00
parent 1d5d6586f3
commit 8773ccf74d

@@ -909,6 +909,8 @@ def pytorch_attention_flash_attention():
         return True
     if is_ascend_npu():
         return True
+    if is_amd():
+        return True #if you have pytorch attention enabled on AMD it probably supports at least mem efficient attention
     return False

 def mac_version():
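In context, the changed function is a chain of platform checks that falls through to `False`; this commit adds an AMD branch that optimistically reports support, since ROCm offers no reliable runtime probe. A minimal sketch of that pattern, with stand-in `is_*()` helpers and a module flag that are assumptions here, not ComfyUI's real implementations:

```python
# Sketch of the capability-check pattern this commit extends.
# The helpers and flag below are hypothetical stand-ins; in ComfyUI the
# real checks inspect the torch build and command-line arguments.
PYTORCH_ATTENTION_ENABLED = True  # stand-in for --use-pytorch-cross-attention

def is_ascend_npu():
    return False  # stand-in: would detect an Ascend NPU torch backend

def is_amd():
    return True   # stand-in: would detect a ROCm/HIP torch build

def pytorch_attention_flash_attention():
    if not PYTORCH_ATTENTION_ENABLED:
        return False
    if is_ascend_npu():
        return True
    if is_amd():
        # No way to probe mem efficient attention support on ROCm at
        # runtime, so assume it when the user opted into pytorch attention.
        return True
    return False

print(pytorch_attention_flash_attention())
```

The design choice is opt-in optimism: rather than under-estimating available attention kernels on every AMD card, the check trusts the user's explicit `--use-pytorch-cross-attention` flag as the signal of support.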