If I call '.enable_xformers_memory_efficient_attention()' on the model, it always reports the following warning:
You are removing possibly trained weights of AttnProcessor2_0() with <diffusers.models.attention_processor.XFormersAttnProcessor object at 0x7fb624b2ebb0>
You are removing possibly trained weights of IPAttnProcessor2_0(
  (to_k_ip): Linear(in_features=2048, out_features=640, bias=False)
  (to_v_ip): Linear(in_features=2048, out_features=640, bias=False)
) with <diffusers.models.attention_processor.XFormersAttnProcessor object at 0x7fb624b2e850>
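For context, here is a minimal sketch of the call order that produces this warning for me. It assumes a diffusers SDXL pipeline with IP-Adapter attention processors already installed; the model and adapter names are only placeholders, not the exact checkpoints from my setup:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder base model; my actual checkpoint differs.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Loading IP-Adapter replaces the UNet's default attention processors with
# IPAttnProcessor2_0 modules that carry trained to_k_ip / to_v_ip weights.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)

# This call swaps every attention processor for XFormersAttnProcessor,
# which is when the "removing possibly trained weights" warning appears.
pipe.enable_xformers_memory_efficient_attention()
```

Is it safe to enable xformers after the IP-Adapter processors are set, or does this actually discard the trained to_k_ip / to_v_ip weights?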