Javascript and React

Feb. 15, 2020, 8:06 a.m.

In university I was a TA for what they called "third stream computing," which was basically simple computer programming for non-CS people. We covered things like HyperCard, HTML and Javascript, which at the time was limited to things like showing alerts and validating and submitting forms. I think the idea of Javascript as a very simple and not very powerful language has stuck with me through the years, even while Javascript has been maturing and advancing enormously.

A few weeks ago I decided one morning to spend an hour going through a ReactJS tutorial, because I keep hearing so much about it. After about half an hour I stopped the tutorial and started rewriting something I was working on in React. Since then I've been doing all my web-related work in React and rewriting web projects I'd previously built.

Javascript frameworks like React are going to completely change web development: instead of the back-end serving HTML, it will serve JSON through APIs, and the front-end will mostly be Javascript. In my opinion that's a much better and more efficient way to create interactive websites. Not only is it faster to load and render data on the front-end, but it's much cleaner in terms of code and separation of functionality.
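As a minimal sketch of the back-end side - using Flask purely as a stand-in, with a made-up route and data - the endpoint just returns JSON and leaves all the rendering to the front-end:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Instead of rendering an HTML template on the server, the endpoint
    # returns plain JSON for a Javascript front-end (React etc.) to render.
    @app.route("/api/posts")
    def posts():
        return jsonify([
            {"id": 1, "title": "Javascript and React"},
            {"id": 2, "title": "VAE GAN"},
        ])

    if __name__ == "__main__":
        app.run()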

Labels: javascript , react


VAE GAN

Dec. 8, 2019, 11:09 a.m.

I had been trying to train a version of VAE-GAN for a few weeks and it wasn't working as well as I had hoped. As suggested in the VAE-GAN paper, I had added an auxiliary output to the discriminator which attempts to predict the 40 features provided with each image in the CelebA dataset, and I was scaling that loss to try to bring it in line with the GAN discriminator loss. But I was doing that incorrectly, so the auxiliary loss ended up overwhelming the GAN loss: I was summing, rather than averaging, the losses, and the lambda I was using to scale the loss was appropriate for a mean loss. With 40 features the summed auxiliary loss was 40x the GAN loss at base, so I needed to divide the lambda by 40 to get the effect I wanted.
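To make that concrete, here's a minimal sketch of the scaling issue (the tensors and the lambda value here are made up for illustration):

    import torch
    import torch.nn.functional as F

    batch, n_attrs = 64, 40
    aux_logits = torch.randn(batch, n_attrs)  # auxiliary attribute predictions
    attrs = torch.randint(0, 2, (batch, n_attrs)).float()  # CelebA attributes
    gan_loss = torch.tensor(0.7)  # placeholder for the GAN discriminator loss
    lam = 0.1  # hypothetical weight, tuned assuming a mean-reduced loss

    bce = F.binary_cross_entropy_with_logits(aux_logits, attrs, reduction="none")
    aux_mean = bce.mean()            # mean over the batch and all 40 attributes
    aux_sum = bce.sum(dim=1).mean()  # summed over attributes: ~40x the mean

    # With the summed loss, the lambda has to shrink by the number of attributes
    loss = gan_loss + lam * aux_mean             # what lam was tuned for
    loss = gan_loss + (lam / n_attrs) * aux_sum  # equivalent scale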

After having corrected that error I am finally making some progress with these models. Below are sample images from two models I am training. The first outputs images at 160x160, the second at 128x128.

I guess the moral of this story is that if something isn't working the way you expect it to, double-check your math before you continue training!

Labels: python , machine_learning , pytorch , gan


Eigenvectors from Eigenvalues

Nov. 24, 2019, 12:07 p.m.

This paper, released over the summer, describes a newly discovered method for obtaining eigenvectors from eigenvalues. While the method only works for Hermitian matrices, previous methods for computing eigenvectors were far more complicated and costly. Although relatively easy conceptually, it can be quite costly to determine the dominant eigenvector of a matrix, and the process has to be repeated, after removing the dominant eigenvector from the matrix, in order to compute additional eigenvectors.
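For context, the classical approach looks something like this minimal numpy sketch of power iteration with deflation for a symmetric matrix (this is not the paper's method):

    import numpy as np

    def power_iteration(A, iters=1000):
        """Dominant eigenvalue/eigenvector of a symmetric matrix A."""
        v = np.random.default_rng(0).standard_normal(A.shape[0])
        for _ in range(iters):
            v = A @ v
            v /= np.linalg.norm(v)
        return v @ A @ v, v  # Rayleigh quotient and unit eigenvector

    def top_k_eigenpairs(A, k):
        """Repeat power iteration, deflating A after each eigenpair."""
        A = A.astype(float).copy()
        pairs = []
        for _ in range(k):
            lam, v = power_iteration(A)
            pairs.append((lam, v))
            A = A - lam * np.outer(v, v)  # remove the found component
        return pairs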

This new method shows that there is a straightforward relationship between the squared norms of the components of the eigenvectors, the eigenvalues of the matrix, and the eigenvalues of its submatrices. I can't stress enough how amazing this is. This will require that all linear algebra textbooks be revised.
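For reference, the identity from the paper, for a Hermitian n x n matrix A with eigenvalues lambda_i(A) and submatrices M_j obtained by deleting row and column j, is:

    |v_{i,j}|^2 \prod_{k=1,\, k \ne i}^{n} \bigl( \lambda_i(A) - \lambda_k(A) \bigr)
        = \prod_{k=1}^{n-1} \bigl( \lambda_i(A) - \lambda_k(M_j) \bigr)

where v_{i,j} is the j-th component of the unit-norm eigenvector associated with lambda_i(A).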

I have a numpy implementation of this new method available here.
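A sketch of the idea in numpy, assuming distinct eigenvalues (note the identity only recovers the squared magnitudes of the eigenvector components, not their signs or phases), might look like this:

    import numpy as np

    def eigvec_magnitudes(A):
        """|v_{i,j}|^2 for a Hermitian matrix A with distinct eigenvalues."""
        n = A.shape[0]
        lam = np.linalg.eigvalsh(A)  # eigenvalues of the full matrix
        out = np.empty((n, n))
        for j in range(n):
            Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)  # drop row/col j
            mu = np.linalg.eigvalsh(Mj)  # eigenvalues of the submatrix M_j
            for i in range(n):
                num = np.prod(lam[i] - mu)
                den = np.prod(lam[i] - np.delete(lam, i))
                out[i, j] = num / den  # squared magnitude of component j
        return out

An easy sanity check is to compare the result against np.abs(np.linalg.eigh(A)[1]).T ** 2 for a random symmetric matrix.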

Labels: python , linear_algebra


VAE GAN

Sept. 22, 2019, 1:34 p.m.

I started working on a variational autoencoder (VAE) for faces a few months ago. I was easily able to make a non-variational autoencoder that reproduced images incredibly well, but since it was not variational there wasn't much you could do with it other than compress images. I wanted to be able to play with interpolation and such, and for that you need a VAE. So I converted my autoencoder to a variational one, but the problem was that the resulting images were very blurry and the quality wasn't all that great. I thought maybe I could attach a GAN to it to make the images look more realistic. I tried that, but unfortunately it didn't work very well: the GAN was trying to generate images of what it thought were faces while the autoencoder was trying to reproduce its input, as seen in the images below:

[sample images]

After fighting with this for a few months I decided to try to make sure that the GAN was working properly before I added on the autoencoder, and although I had to fight with the GAN quite a bit and was never able to get it to generate really high quality images, I was sure that it was working properly. So I decided to try to hook it up to the autoencoder again.

Then I discovered the paper "Autoencoding beyond pixels using a learned similarity metric", which does the same thing I was trying to do, but in a much smarter way. What I had been doing was using the MSE between the input and the generated images for my VAE loss, and training both the encoder and the decoder with the GAN loss. Obviously this did not work.

What they do in the paper is basically separate out the encoder, leaving the decoder and discriminator as the GAN, which is trained as usual. I had tried to think of ways to train the encoder and decoder separately, but my ideas were much more primitive and didn't work at all. What they do is train the encoder separately, using the KLD loss and - this is the brilliant part - instead of the MSE between the input and the recreation, the MSE between a feature map from an intermediate layer of the discriminator for the real and reconstructed images. So rather than trying to produce an exact duplicate of the input, the encoder is trying to produce something that the discriminator thinks is close to the input.
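In rough (hypothetical) PyTorch terms - with encoder, decoder and discriminator standing in for the actual modules, and features() for whatever intermediate layer of the discriminator is used - the encoder's loss looks something like this:

    import torch
    import torch.nn.functional as F

    def encoder_loss(encoder, decoder, discriminator, x):
        mu, logvar = encoder(x)  # parameters of the approximate posterior
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        x_rec = decoder(z)
        # KL divergence between q(z|x) and the standard normal prior
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # Learned similarity: MSE in the discriminator's feature space,
        # not pixel space, between the real and reconstructed images
        feat_real = discriminator.features(x).detach()
        feat_rec = discriminator.features(x_rec)
        return kld + F.mse_loss(feat_rec, feat_real)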

It took me a few hours to rewrite my code to make use of this new loss and to come up with a version that can run without keeping all of the graphs in memory and can train in a reasonable amount of time, and I think everything is finally working. Hopefully this works better than my previous attempts, and next time I will try to remember to review the literature before trying to implement a new idea on my own.

Labels: pytorch , autoencoders , gan

