Hello, what excellent work you have done, from GAN Dissection to GAN Paint and GAN Rewriting. I have carefully read the papers "GAN Dissection: Visualizing and Understanding Generative Adversarial Networks" and "Semantic Photo Manipulation with a Generative Image Prior".
In particular, I have doubts about how intervening on a middle layer of the GAN draws/removes a user-specified semantic concept in the **user-marked area** of the output image. For example, in GAN Paint (https://ganpaint.io/), we can select a marked region, use GAN Dissection to find the units in the middle layer associated with the best-matching concept, and then edit those units in some way, e.g. by inserting a value k into them.
The question is how to ensure that **the position edited in the middle layer corresponds exactly to the position marked by the user in the output image**. GAN Dissection, as I understand it, finds the agreement between a concept in the output image and a unit (featuremap) in the middle layer, whereas it does not by itself ensure that the concept appears only in the area drawn by the user.
The match between a concept and a unit is clear to me, but I have doubts about the match between the spatial location of the concept in the output image and the edited region of the unit's featuremap. Would you mind giving me more details on this issue?
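For concreteness, here is a minimal sketch of how I currently picture the spatial correspondence, assuming a PyTorch-style fully convolutional generator (the function name `intervene` and its arguments are my own illustration, not taken from your code):

```python
import torch
import torch.nn.functional as F

def intervene(feat, unit_ids, user_mask, value_k):
    """Overwrite selected units inside the user-marked region.

    feat:      intermediate featuremap, shape (1, C, H, W)
    unit_ids:  indices of the units matched to the concept by dissection
    user_mask: binary mask drawn in output-image coordinates, shape (1, 1, H_out, W_out)
    value_k:   activation value to insert (e.g. 0 to erase the concept)
    """
    # Because the generator is fully convolutional, a spatial position in the
    # featuremap aligns (up to resolution) with the same relative position in
    # the output image, so the user mask can simply be resized to the layer's H x W.
    mask = F.interpolate(user_mask.float(), size=feat.shape[2:], mode='nearest')
    edited = feat.clone()
    for u in unit_ids:
        # Keep the original activations outside the mask; insert value_k inside it.
        edited[:, u] = feat[:, u] * (1 - mask[:, 0]) + value_k * mask[:, 0]
    return edited
```

Is this roughly the mechanism, i.e. that the downsampled user mask restricts the edit to the corresponding featuremap locations, or is there an additional step that aligns the edited region with the user's marking?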