Update on TensorFlow GPU Windows Errors

Feb. 16, 2018, 9:07 a.m.

After playing with TensorFlow GPU on Windows for a few days I have more information on the errors. I am running TensorFlow 1.6, currently the latest version, with Python 3.6 and Nvidia CUDA 9.0 on an Nvidia GeForce GT 750M.

When the Python process on Windows crashes with a CUDA_ERROR_LAUNCH_FAILED error, the problem can be solved by reducing the fraction of GPU memory that TensorFlow is allowed to use:

import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.7  # limit TensorFlow to 70% of GPU memory
sess = tf.Session(config=config)

If the Python script fails with an error about exhausted resources or being unable to allocate enough memory, you need to use a smaller batch size. Unlike the launch failure above, this does not crash the Python process: TensorFlow raises an exception that your script can catch.
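
As a rough sketch, the exception can be caught and the batch size reduced and retried; train_op, next_batch(), and the open session sess are hypothetical stand-ins for your own graph (in TensorFlow 1.x the error class is tf.errors.ResourceExhaustedError):

try:
    sess.run(train_op, feed_dict=next_batch(batch_size))
except tf.errors.ResourceExhaustedError:
    # The GPU ran out of memory; halve the batch size and try again
    batch_size //= 2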

Since figuring these things out, I have had no problems at all running models on the GPU.

Labels: python , machine_learning , tensorflow


TensorFlow GPU Errors on Windows

Feb. 15, 2018, 1:50 p.m.

I have been loving TensorFlow lately and have installed tensorflow-gpu on my Windows 10 laptop. Given that the GPU in my laptop is not a great one, I have run into quite a few issues, most of which I have solved. My GPU is an Nvidia GeForce GT 750M with 2GB of RAM, and I am running the latest release of TensorFlow as of February 2018, with Python 3.6.

If you are running into errors I would suggest you try these things in this order:

  1. Try reducing the batch size for training AND validation. I always use batches for training but used to evaluate on the validation data all at once. By using batches for validation as well and averaging the results I am able to avoid most of the memory errors (see the sketch after this list).
  2. If this doesn't work, try restricting the amount of GPU RAM available to TensorFlow with config.gpu_options.per_process_gpu_memory_fraction = 0.7,
    which restricts the amount available to 70%. Note that I have never been able to run my GPU with the memory fraction above 0.7.
  3. If all else fails, turn the GPU off and use the CPU:
    config = tf.ConfigProto(device_count={'GPU': 0})
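
Here is a minimal sketch of the batched validation from step 1. The placeholders x and y, the accuracy tensor, and the arrays X_val and y_val are hypothetical names standing in for your own graph and data:

val_batch_size = 64
accuracies = []
for i in range(0, len(X_val), val_batch_size):
    acc = sess.run(accuracy, feed_dict={x: X_val[i:i + val_batch_size],
                                        y: y_val[i:i + val_batch_size]})
    accuracies.append(acc)
# Averaging per-batch accuracy is exact when every batch is the same size
val_accuracy = sum(accuracies) / len(accuracies)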

The difference between using the CPU and the GPU is like night and day: with the CPU it takes all day to train through 20 epochs, while with the GPU the same can be done in a few hours. I think the main bottleneck with my GPU is the amount of RAM, which can easily be managed by controlling the batch size and the config settings above. Just remember to feed the config into the session.
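
Feeding the config into the session looks like this in TensorFlow 1.x:

sess = tf.Session(config=config)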

Labels: python , data_science , machine_learning , tensor_flow


Batch Normalization with TensorFlow

Feb. 13, 2018, 1:44 p.m.

I was trying to use tf.layers.batch_normalization to improve the accuracy of my CIFAR classifier, and it seemed to have little to no effect. According to this StackOverflow post you need to do something extra, which is not mentioned in the documentation, to get batch normalization to work.

extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
# Run the update ops together with the training op; the feed_dict is elided here
sess.run([train_op, extra_update_ops], ...)

The batch norm update operations are added to the UPDATE_OPS collection, so you need to fetch them and run them in the session along with the training op. Before I added the extra_update_ops the batch normalization was definitely not running; now it is, though whether it helps remains to be seen.
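
An alternative idiom is to make the training op depend on the update ops so they run automatically on every step; this sketch assumes an optimizer and a loss tensor already exist:

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)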

Also make sure to pass training=[BOOLEAN | TENSOR] in the call to batch_normalization() so that it normalizes with the batch statistics during training and with the learned moving averages during evaluation. I use a placeholder and pass in whether it is training or not via the feed_dict:

training = tf.placeholder(dtype=tf.bool)

And then use this in my batch norm and dropout layers:

training=training
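
At run time I set the flag through the feed_dict. In this sketch, x and y are hypothetical data placeholders, X_batch/y_batch and X_val/y_val the corresponding data, and accuracy the evaluation tensor:

sess.run([train_op, extra_update_ops],
         feed_dict={x: X_batch, y: y_batch, training: True})
acc = sess.run(accuracy, feed_dict={x: X_val, y: y_val, training: False})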

There were a few other things I had to do to get batch normalization to work properly:

  1. I had been using local response normalization, which apparently doesn't help much. I removed those layers and replaced them with batch normalization layers.
  2. Remove the activation from the conv2d layers. I run the output through the batch normalization layer and then apply the ReLU (see the sketch below).
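
Putting both changes together, here is a minimal sketch of one convolutional block; inputs and the filter sizes are hypothetical, and training is the boolean placeholder from above:

conv = tf.layers.conv2d(inputs, filters=32, kernel_size=3,
                        padding='same', activation=None)  # no activation here
norm = tf.layers.batch_normalization(conv, training=training)
out = tf.nn.relu(norm)  # apply the ReLU after batch norm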

Before I made these changes the model with batch normalization didn't seem to be training at all; the accuracy just bounced up and down around the baseline of 0.10. After these changes it seems to be training properly.

Labels: data_science , machine_learning , tensor_flow


Why I Stopped Programming in PHP

Feb. 12, 2018, 12:36 p.m.

A few months ago I went to a university to interview for a job working on their website. Up until that day I had been programming in PHP with Laravel and Symfony. These MVC frameworks are object-oriented, and they make PHP seem like a "real" programming language. I love them for that. Everything is nicely organized and segmented and you can do almost all the programming in an object oriented fashion.

The university was using a CMS, and they asked me to take a little test by looking at some of the code. The code was written in non-object-oriented PHP, which means that everything was done in the page. If you need data from the database, you do the query right there in the page, do whatever manipulation you need, then loop through the results with everything embedded directly in the HTML with <?php tags.

I was shocked and dismayed looking at that code. It was about as elegant as the BASIC code I was writing when I was 10 years old, but with HTML mixed in for good measure. It was horrifying to realize that this was the language I was programming in, even if OOP frameworks dress it up as something that resembles nicely written code.

The other thing that happened that day was they asked me what my ideal job was. I said I would really like to be involved in some sort of research, but there was no need for PHP in that field. When I got home I thought about it for a few hours, then decided that if that was what I wanted to do I should learn whatever I needed to learn to do it. And that is what I did, and that is why I stopped programming in PHP.

Don't get me wrong: I'm not saying PHP is a horrible language; it can be used very well. But the majority of employers here in Switzerland looking for PHP programmers are not looking for people who do it very well; they are looking for people who have done a three-month bootcamp and will work for very low pay. For me, it just wasn't worth the effort to keep doing something I don't really even like that much.

Labels: coding , php

