Can someone explain the intuition behind applying an optimizer (gradient descent, Adam) to a latent code?
As I understand it, the optimizer searches the latent space for the code that reproduces a given image: the latent code is updated instead of the weights of a neural network.
Why does this work, given that it is not the weights of a neural network being updated?
How does the optimizer find the latent code that, when fed to the generator, reproduces the input image?
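Here is a minimal PyTorch sketch of the setup as I understand it (sometimes called GAN inversion). The generator, target image, and all hyperparameters here are toy placeholders I made up so the script runs end to end; the point is just that `z` is registered as the optimizer's only parameter, while the generator's weights are frozen:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

latent_dim = 64
image_dim = 784  # e.g. a flattened 28x28 image

# Stand-in for a pretrained generator G: z -> image.
# In practice this would be a trained GAN generator; here it is an
# untrained toy network just so the example is self-contained.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, image_dim),
    nn.Tanh(),
)
for p in generator.parameters():
    p.requires_grad_(False)  # freeze the weights: they are NOT updated

target = torch.rand(1, image_dim) * 2 - 1  # the image we want to invert

# The latent code is the only "parameter" the optimizer sees.
z = torch.randn(1, latent_dim, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    reconstruction = generator(z)
    loss = torch.nn.functional.mse_loss(reconstruction, target)
    loss.backward()   # gradients flow *through* the frozen generator into z
    optimizer.step()  # only z moves; the weights stay fixed

print(f"final reconstruction loss: {loss.item():.4f}")
```

So mechanically I can see that backpropagation treats `z` just like any other leaf tensor, but I'd like the intuition for why descending this loss surface over latent codes actually recovers a code that represents the input image.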