
VAE-GAN

Sunday, December 8, 2019

I had been trying to train a version of VAE-GAN for a few weeks and it wasn't working as well as I had hoped. As suggested in the VAE-GAN paper, I had added an auxiliary output to the discriminator that attempted to predict the 40 attributes provided with each image in the CelebA dataset, and I was scaling that loss to bring it in line with the GAN discriminator loss. I was doing that scaling incorrectly, though, so the auxiliary loss ended up overwhelming the GAN loss. (I was summing the losses rather than averaging them, and the lambda I was using to scale the loss was appropriate for a mean loss; with 40 features the summed auxiliary loss was 40x the GAN loss to begin with, so I needed to divide the lambda by 40 to get the effect I wanted.)
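To make the scaling issue concrete, here is a minimal sketch (with made-up tensors and an illustrative lambda, not my actual code) showing how a sum-reduced auxiliary loss ends up roughly 40x a mean-reduced one:

    import torch
    import torch.nn.functional as F

    # Stand-in tensors for the discriminator outputs:
    #   adv_logits  - real/fake prediction, shape (batch,)
    #   attr_logits - auxiliary prediction of the 40 CelebA attributes, shape (batch, 40)
    batch = 32
    adv_logits = torch.randn(batch)
    adv_targets = torch.ones(batch)
    attr_logits = torch.randn(batch, 40)
    attr_targets = torch.randint(0, 2, (batch, 40)).float()

    lam = 0.1  # illustrative weight, tuned assuming a *mean* auxiliary loss

    gan_loss = F.binary_cross_entropy_with_logits(adv_logits, adv_targets)

    # Bug: summing over the 40 attributes makes the auxiliary term ~40x
    # larger per sample, so a lambda tuned for a mean loss lets it swamp
    # the GAN loss.
    aux_sum = F.binary_cross_entropy_with_logits(
        attr_logits, attr_targets, reduction='sum') / batch
    bad_total = gan_loss + lam * aux_sum

    # Fix: either average over the attributes (the default 'mean')...
    aux_mean = F.binary_cross_entropy_with_logits(attr_logits, attr_targets)
    good_total = gan_loss + lam * aux_mean
    # ...or keep the sum and divide lambda by 40 to compensate.
    also_good = gan_loss + (lam / 40) * aux_sum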

After having corrected that error I am finally making some progress with these models. Below are sample images from two models I am training. The first outputs images at 160x160, the second at 128x128.

I guess the moral of this story is that if something isn't working the way you expect it to, double-check your math before you continue training!

Labels: python, machine_learning, pytorch, gan
3 comments

GAN Hacks

Saturday, September 21, 2019

I've now been trying to train my GANs for quite a while and still haven't been too successful, but I have learned some tricks. I found this excellent article a while ago and didn't really understand it completely at first, but after trying a lot of its tricks I understand them now. Here are my thoughts and some additional tricks I have used:

  1. Item 5 from the article - use convolutional layers with strides of 2 rather than pools: one of the biggest problems in training GANs is maintaining the gradients. Since the gradients for the generator come from the discriminator, vanishing or exploding gradients are a huge problem and need to be avoided at all costs. Max pooling eliminates all of the gradients but one, so convolutional layers with a stride of 2 are a better way to downsample. Average pooling will also work, but I've found that stride-2 layers work better.
  2. With apologies to Frank Herbert, "the gradients must flow." I've had luck using dense convnets as the discriminator because of the improved gradient flow they provide.
  3. Item 6 - soft and noisy labels - this has helped a LOT. I haven't tried using random labels, but I have had luck using labels that are slightly off from 0 or 1, like 0.1 or 0.99. This keeps the discriminator from becoming too confident in its predictions and keeps the gradient to the generator from exploding. I've learned that when training GANs, exploding gradients are just as bad as vanishing gradients, in that the generator learns nothing either way.
  4. The article also suggests occasionally flipping the labels, which I'm not sure exactly how to interpret. In practice, if the discriminator gets too strong I will occasionally flip the labels for a few training steps to confuse it a bit and then flip them back. This seems to help the generator catch up a bit.
  5. One other thing I have found is that using smaller batch sizes seems to work better. When I started using the V100 GPUs I immediately increased my batch size to the max the GPU could handle, but the generator did not learn well at all. Reducing the batch size helps a lot, possibly by introducing some additional regularization to the discriminator. 
  6. Dropout - the article mentions using dropout in the generator, which I haven't tried. I do use dropout in the discriminator, which I wasn't sure about since it will reduce the gradients, but it does help slow down the discriminator which seems to help training.
  7. Item 11 - I have tried to do this and wasted a lot of time. If your training has collapsed it is not likely you will be able to uncollapse it by training one network more than the other. I would suggest that rather than training one network more than the other you make sure that the networks are roughly equally matched from the start. Training the generator more, for example, tends to lead to mode collapse; training the discriminator more tends to lead to the gradients exploding or vanishing.
  8. Item 12 - I haven't tried this one yet but it is interesting. I've heard a lot about using auxiliary outputs to provide regularization and if I had labelled images I would definitely try this one. In fact, I may try to label my images somehow in order to do so.
  9. One thing that was not mentioned in the article, but which I have found very helpful, is using separate batches for real and fake images when training the discriminator. At first I thought this was a bizarre idea and wasn't sure how it would help, but it really does. The sketch after this list shows this, along with the soft labels and stride-2 downsampling from the items above.
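
To make a few of these concrete, below is a minimal sketch (not my actual models; the layer widths, the 0.9/0.1 label values, and the helper names are just illustrative) of a discriminator that downsamples with stride-2 convolutions and a training step that uses soft labels and separate real/fake batches:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # A small discriminator that downsamples with stride-2 convolutions
    # instead of max pooling, so every activation keeps a gradient path.
    class Discriminator(nn.Module):
        def __init__(self, channels=3, width=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, width, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(width * 2, width * 4, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(width * 4, 1),
            )

        def forward(self, x):
            return self.net(x)

    def discriminator_step(d, opt_d, real, fake):
        """One discriminator update using soft labels and separate
        real/fake batches (two passes, one optimizer step)."""
        opt_d.zero_grad()

        # Soft labels: 0.9 / 0.1 instead of 1 / 0 keep the discriminator
        # from becoming overconfident and blowing up the generator's gradient.
        real_labels = torch.full((real.size(0), 1), 0.9, device=real.device)
        fake_labels = torch.full((fake.size(0), 1), 0.1, device=fake.device)

        # Separate batches: score real and fake images in independent
        # passes rather than concatenating them into one batch.
        loss_real = F.binary_cross_entropy_with_logits(d(real), real_labels)
        loss_real.backward()
        loss_fake = F.binary_cross_entropy_with_logits(d(fake.detach()), fake_labels)
        loss_fake.backward()

        opt_d.step()
        return loss_real.item() + loss_fake.item()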

Some additional tips on how to construct a GAN:

  1. Start small - when I started playing with GANs I immediately made two large, deep convnets and tried to train them and they learned nothing. I recommend you start with a very small network, train it enough to make sure it is learning something, then add a layer and repeat. I still don't know what the problem with my original networks was, or if I just wasn't patient enough, but it's a lot easier to find problems if you add one layer at a time (or one block at a time) than if you start off with a 100 layer network.
  2. Keep things simple - training a GAN involves keeping two networks learning at roughly the same pace. It's a delicate dance, and I would recommend not throwing too many bells and whistles into it. As in the previous tip, make sure everything is working properly before you add a newfangled loss function or dynamic loss weighting or anything else.


Labels: machine_learning, gan
No comments

K80 vs V100

Monday, September 16, 2019

Discovering how much cheaper spot EC2 instances were than normal on-demand instances gave me the courage to try out a faster GPU. I had been using K80s which are painfully slow, but very cheap. The spot price for the V100 is about the same as the on-demand price of the K80s, so using those with spot instances won't be any cheaper, but it won't be more expensive either.

I didn't think the V100s were such great GPUs, so I wasn't expecting them to be worth the extra cost. How wrong I was. Training the network I am currently playing with on a K80 with a batch size of 48 took about 8-12 hours per epoch. Training it on a V100 with a batch size of 64 looks like it's going to take about 2 hours. With the V100s priced at about 4x the K80s, that works out to about the same cost per unit of compute, or a little bit cheaper, depending on exactly how long it took per epoch on the K80.
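
As a rough back-of-the-envelope check (the hourly price here is an arbitrary unit, not an actual AWS rate):

    # Illustrative numbers only: say the K80 instance costs p per hour
    # and the V100 spot instance about 4p per hour.
    p = 1.0                      # arbitrary unit price for the K80
    k80_hours_per_epoch = 10     # midpoint of the 8-12 hours I was seeing
    v100_hours_per_epoch = 2

    k80_cost_per_epoch = p * k80_hours_per_epoch           # 10p
    v100_cost_per_epoch = (4 * p) * v100_hours_per_epoch   # 8p

    # 10.0 vs 8.0 -> the V100 comes out slightly cheaper per epoch
    print(k80_cost_per_epoch, v100_cost_per_epoch)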

When you factor in the value of not having to wait an entire day to see the results of an epoch, this is a no-brainer as far as I'm concerned. Unfortunately, I'm sure my AWS bill is going to increase substantially. That's how they get you... Once you have a taste of HPC they know you'll be back for more...

Labels: machine_learning, ec2, aws, gpu
No comments

AWS EC2 Spot Instances

Thursday, September 12, 2019

My major complaint about using EC2 GPU instances was the cost; it gets very expensive to run a GPU instance for more than a few hours. Last week I was wondering why I wasn't using spot instances, so I set up a request and I've been running it for a few days now. It is about 1/4 the price of a normal instance, so it's not much more expensive than renting a CPU-only on-demand instance. I was hoping to get a better GPU than the K80, but I settled for the K80 because it was more readily available than the better GPUs; next time I may request a better one and see what happens.

The downside of spot instances is that they will be terminated if the capacity is needed for an on-demand instance, and my instance was terminated the other night. But then I spun up a new one in the morning and that one has been running for a few days now. I can't believe I haven't used these before.
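
I set my request up through the EC2 console, but for reference, a roughly equivalent request via boto3 looks something like the sketch below (the AMI ID, key pair, and region are placeholders, not the values I used):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    response = ec2.request_spot_instances(
        InstanceCount=1,
        Type="one-time",  # the request goes away if the instance is terminated
        LaunchSpecification={
            "ImageId": "ami-xxxxxxxx",    # placeholder Deep Learning AMI ID
            "InstanceType": "p2.xlarge",  # K80 instance; p3.2xlarge has a V100
            "KeyName": "my-key-pair",     # placeholder key pair name
        },
    )
    print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])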

Labels: machine_learning, aws
No comments

It is difficult to play around with the structure for the GAN I am working on in Colab since it trains so slowly. I can usually get maybe 2 or 3 epochs in a day, which means that I need to wait a day before evaluating each change I make. I decided to rent a GPU in the cloud for a few days so I could train it a bit more quickly and figure out what works and what doesn't work before going back to Colab.

I already have a Google Cloud GPU instance I was using for my work with mammography, but it was running CUDA 9.0, which apparently is not supported by PyTorch out of the box. I tried to upgrade CUDA to 10, but I think I ended up just making things worse. Rather than spend a whole day trying to fix the GCP instance, and since I have some AWS credits, I decided to try an AWS Deep Learning AMI instance, which already has everything configured.

It was incredibly easy to get set up. The AMI comes pre-configured with virtual environments for the different deep learning frameworks and packages, so there is no need to install CUDA or drivers or anything like that, which is a huge advantage; back when I was setting up the GCP instance it took me a few days to get everything installed and working. One thing I quickly noticed was that the default disk size was not even close to big enough - after downloading a few data files I was already running out of disk space - but it was very easy to increase the disk size.

Then all I had to do was activate the PyTorch environment and launch a notebook, and everything ran smoothly. I did run into a few minor issues, none of which were difficult to resolve:

  • If I launch tmux from within a virtual environment, the tmux session does NOT have the environment activated, and activating the environment from within that session still didn't give me access to the proper modules. This was resolved by launching tmux from outside the venv and then activating the venv from inside tmux.
  • My notebook didn't seem to have access to PyTorch, but that was because I hadn't selected the proper kernel from the Kernel -> Change Kernel menu. I wasn't even aware that one could select the kernel like that.

I used to prefer GCP to AWS because it was more configurable and easier to use. While AWS does have a bit of a learning curve, they really have thought of and provided for just about every possible contingency. We use AWS at my work, and it really is very impressive. I still like the simplicity of GCP, but even simple things like AMIs make such a huge difference in set-up time that I think I'll be using AWS more often now.

Labels: machine_learning, aws, pytorch
No comments
