Closed
Describe the bug
Is it possible to bring back the `attention_mask` argument in the Flux attention processor, i.e.

```python
hidden_states = F.scaled_dot_product_attention(
    query, key, value, dropout_p=0.0, is_causal=False, attn_mask=attention_mask
)
```

https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1910

so that the attention can be tweaked? Otherwise the `attention_mask` argument is unused.
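For context, a minimal plain-PyTorch sketch (not the diffusers processor itself) of what forwarding the mask into `F.scaled_dot_product_attention` would enable: masked-out key positions stop contributing to the output.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# (batch, heads, seq_len, head_dim) — shapes are illustrative only
query = torch.randn(1, 2, 4, 8)
key = torch.randn(1, 2, 4, 8)
value = torch.randn(1, 2, 4, 8)

# Boolean mask broadcastable to (batch, heads, q_len, kv_len):
# True = attend, False = ignore. Here we mask out the last key position.
attention_mask = torch.ones(1, 1, 4, 4, dtype=torch.bool)
attention_mask[..., -1] = False

out_masked = F.scaled_dot_product_attention(
    query, key, value, dropout_p=0.0, is_causal=False, attn_mask=attention_mask
)
out_unmasked = F.scaled_dot_product_attention(
    query, key, value, dropout_p=0.0, is_causal=False
)

# With the mask applied, the last key no longer influences the output,
# so the two results differ.
print(torch.allclose(out_masked, out_unmasked))
```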
Thanks a lot
Reproduction
pip install diffusers
Logs
No response
System Info
Ubuntu