Sept. 22, 2019, midnight
By : Eric A. Scuccimarra
I started working on a variational autoencoder (VAE) for faces a few months ago. I was easily able to make a non-variational autoencoder that reproduced images incredibly well, but since it was not variational there wasn't much you could do with it other than compress images. I wanted to be able to play with interpolation and the like, and for that you need a VAE. So I converted my autoencoder to a variational one, but the resulting images were very blurry and the quality wasn't all that great. I thought maybe I could attach a GAN to make the images look more realistic. I tried that, but unfortunately it didn't work very well: the GAN was trying to generate images of what it thought were faces while the autoencoder was trying to reproduce its input, as seen in the images below:
After fighting with this for a few months I decided to make sure the GAN was working properly on its own before adding the autoencoder back in. Although I had to fight with the GAN quite a bit and was never able to get it to generate really high-quality images, I was sure it was working properly, so I decided to hook it up to the autoencoder again.
Then I discovered the paper Autoencoding beyond Pixels Using a Learned Similarity Metric, which does the same thing I was trying to do, but in a much smarter way. What I had been doing was using the MSE between the input and the generated images as my VAE reconstruction loss, and training both the encoder and the decoder with the GAN loss. Obviously this did not work.
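For reference, the naive loss I had been using can be sketched as a standard VAE objective: pixel-wise MSE between input and reconstruction plus the KL divergence of the latent distribution. (The function and weight names here are illustrative, not from my actual code.)

```python
import torch
import torch.nn.functional as F

def naive_vae_loss(x, x_recon, mu, logvar, kld_weight=1.0):
    # Pixel-wise MSE between input and reconstruction -- this is the
    # term that tends to produce blurry outputs on faces.
    recon = F.mse_loss(x_recon, x, reduction="mean")
    # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld_weight * kld
```

Optimizing this pixel-space MSE alongside a GAN loss is exactly the combination that didn't work for me.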
What they do in the paper is essentially separate out the encoder and treat the decoder and discriminator as the GAN, which is trained as usual. I had tried to think of ways to train the encoder and decoder separately, but my ideas were much more primitive and didn't work at all. What they do is train the encoder separately, using the KLD loss and - this is the brilliant part - instead of the MSE between the input and the reconstruction, the MSE between feature maps from an intermediate layer of the discriminator for the real and reconstructed images. So rather than trying to produce an exact duplicate of the input, the encoder is trying to produce something that the discriminator thinks is close to the input.
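A minimal PyTorch sketch of that learned similarity metric, assuming a hypothetical discriminator (the layer sizes and names here are made up for illustration, not the paper's architecture): the discriminator exposes an intermediate feature map alongside its real/fake score, and the reconstruction loss is the MSE between those feature maps rather than between pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Toy discriminator that exposes an intermediate feature map."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        # LazyLinear infers its input size on the first forward pass.
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

    def forward(self, x):
        f = self.features(x)        # intermediate feature map
        return self.head(f), f      # (real/fake logit, features)

def feature_recon_loss(disc, x_real, x_recon):
    # MSE in the discriminator's feature space instead of pixel space:
    # the reconstruction only needs to look similar to the input as far
    # as the discriminator's learned features are concerned.
    _, f_real = disc(x_real)
    _, f_fake = disc(x_recon)
    # Detach the real features: this loss trains the encoder/decoder,
    # not the discriminator.
    return F.mse_loss(f_fake, f_real.detach())
```

The encoder is then trained on this feature-space MSE plus the usual KLD term, while the decoder and discriminator keep their ordinary GAN losses.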
It took me a few hours to rewrite my code to use this new loss and to come up with a version that could run without keeping all of the graphs in memory while still training in a reasonable amount of time, and I think everything is finally working. Hopefully this works better than my previous attempts, and next time I will try to remember to review the literature before implementing a new idea on my own.
Labels: pytorch , autoencoders , gan