
[cross_map_replace_steps not used?] #12

Open · tomguluson92 opened this issue Nov 23, 2023 · 3 comments

Comments


tomguluson92 commented Nov 23, 2023

Dear authors,

Thanks for your brilliant work! While diving into swapping_class.py (https://github.com/eric-ai-lab/photoswap/blob/main/swapping_class.py#L179) to understand the usage of

  • cross_map_replace_steps
  • self_output_replace_steps
  • self_map_replace_steps

I could not find where these parameters are used, in either self.replace_cross_attention(xxx) or self.replace_self_attention(xxx).

I would also like to know about self.local_blend; I could not find it used anywhere in the code.

Could you please explain the meaning of these settings and how they actually influence the generated result?

Thanks.

  • source image (Justin Bieber checkpoint): [image]

  • cross_map_replace_steps = 0.6, self_output_replace_steps = 0.8: [image]

  • cross_map_replace_steps = 0.9, self_output_replace_steps = 0.6: [image]

g-jing (Collaborator) commented Nov 23, 2023

cross_map_replace_steps and self_output_replace_steps are two parameters of the controller, which is used in this function:

def register_attention_control(model, controller):

You should be able to work it out if you trace how the controller is called.
self.local_blend controls whether part of the latent image at each diffusion step of the generated image is directly swapped with the latent of the source image, so that background pixels can be borrowed directly from the original image.
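
For anyone else tracing this, here is a minimal, simplified sketch (not the repository's actual implementation; the names StepGatedController, num_steps, and cur_step are illustrative, following the prompt-to-prompt convention) of how a controller registered via register_attention_control can gate replacement by the current step fraction, and of what a local-blend step could look like:

import torch

class StepGatedController:
    # Illustrative controller: the *_replace_steps thresholds are fractions of
    # the total number of diffusion steps. Hooked attention layers call
    # forward() once per attention computation.
    def __init__(self, num_steps, cross_map_replace_steps, self_output_replace_steps):
        self.num_steps = num_steps
        self.cross_map_replace_steps = cross_map_replace_steps
        self.self_output_replace_steps = self_output_replace_steps
        self.cur_step = 0

    def forward(self, attn, is_cross):
        # attn[0] holds the source-image branch, attn[1:] the target branch.
        frac = self.cur_step / self.num_steps
        threshold = self.cross_map_replace_steps if is_cross else self.self_output_replace_steps
        if frac < threshold:
            attn[1:] = attn[0:1]  # copy from the source branch early in sampling
        return attn

def local_blend(x_t_target, x_t_source, mask):
    # Keep the source latent outside the (word-derived) mask, so the
    # background is carried over from the original image unchanged.
    return torch.where(mask, x_t_target, x_t_source)

So a larger threshold keeps copying from the source branch for more of the sampling trajectory, which is consistent with the parameter sweeps shown in the images above.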

tomguluson92 (Author) commented Nov 23, 2023

Thanks for the quick reply!

But I still can't find anywhere inside swapping_class.py where cross_map_replace_steps and self_output_replace_steps are used.

The only member functions used are replace_cross_attention & replace_self_attention:

class AttentionSwap(AttentionControlEdit):

I have also checked the code snippet in utils.py multiple times.

Neither self.replace_cross_attention(xxx) nor self.replace_self_attention(xxx) references them, and local_blend is not used during inference either.


nancy6o6 commented Aug 15, 2024

I think the cross-attention replacement threshold is actually defined by:

self.cross_replace_alpha = utils.get_time_words_attention_alpha(prompts, num_steps, cross_replace_steps, tokenizer).to(device)

which takes cross_replace_steps as a parameter. self.cross_replace_alpha then plays a role in

photoswap/swapping_class.py

Lines 136 to 139 in 570ca0d

if is_cross:
    alpha_words = self.cross_replace_alpha[self.cur_step]
    attn_repalce_new = self.replace_cross_attention(attn_base, attn_repalce) * alpha_words + (1 - alpha_words) * attn_repalce
    attn[1:] = attn_repalce_new
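
To make that concrete, here is a simplified sketch of what get_time_words_attention_alpha effectively produces when cross_replace_steps is a single float (the real function in utils.py also handles per-word dictionaries and per-prompt dimensions; the shape below is illustrative):

import torch

def get_time_words_alpha_sketch(num_steps, cross_replace_steps, max_words=77):
    # One alpha entry per diffusion step and per token: 1.0 while replacement
    # is active, 0.0 afterwards.
    alpha = torch.zeros(num_steps + 1, 1, 1, max_words)
    alpha[: int(num_steps * cross_replace_steps)] = 1.0
    return alpha

# At step t the quoted code then blends the two maps:
#   alpha_words = cross_replace_alpha[t]
#   attn_new = replace_cross_attention(attn_base, attn_repalce) * alpha_words
#              + (1 - alpha_words) * attn_repalce
# so cross_replace_steps controls for how many steps the source map wins.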
