
Structure of a training program

The structure of a training program always consists of the following steps:

  1. Set up the script environment: package imports, use of the GPU, and so on.
  2. Load data: a data loader class provides access to the data during training, usually in a random order so that too many similar examples of the same class do not follow each other, but sometimes in a precise order, for example in the case of curriculum learning, where simple examples come first and complex ones last.
  3. Preprocess the data: a set of transformations, such as swapping dimensions on images, or adding blur or noise. It is very common to add data augmentation transformations, such as random crops, scaling, or brightness/contrast jittering, to get more examples than the original ones and reduce the risk of overfitting on the data. If the number of free parameters in the model is too large with respect to the training dataset size, the model might simply memorize the available examples. Likewise, if the dataset is too small and too many iterations are run on the same data, the model might become too specific to the training examples and not generalize well to new, unseen examples.
  4. Build a model: define the model structure, with the parameters held in persistent variables (shared variables) whose values are updated during training so that the model fits the training data.
  5. Train: there are different algorithms, either training on the full dataset as a whole or training on each example one at a time. The best convergence is usually achieved by training on a batch, a small subset of examples grouped together, typically from a few tens to a few hundreds of examples.

    Another reason to use batches is to improve training speed on the GPU: individual data transfers are costly, and GPU memory is usually not large enough to host the full dataset anyway. Since the GPU is a parallel architecture, processing a batch of examples is usually faster than processing them one by one, up to a certain point, and seeing more examples at the same time accelerates convergence (in wall-clock time), again up to a certain point. This remains true even when GPU memory is large enough to host the whole dataset: the diminishing returns on batch size usually make smaller batches faster than a single batch covering the whole dataset. The same holds for modern CPUs, but the optimal batch size is usually smaller.

    Note

    An iteration is a training step on one batch. An epoch is the number of iterations required for the algorithm to see the full dataset. For example, with 50,000 training examples and a batch size of 100, one epoch consists of 500 iterations.

  6. Validate: during training, after a certain number of iterations, the model is usually evaluated on a split of the training data, or on a validation dataset that has not been used for learning, and the loss is computed on this validation set. Although the algorithm's objective is to reduce the loss on the training data, this does not guarantee generalization to unseen data; validation data is unseen data used to estimate the generalization performance. A lack of generalization can occur when the training data is not representative, is an exception that has not been sampled correctly, or when the model overfits the training data.

    Validation verifies that everything is going well, and training is stopped when the validation loss no longer decreases, even though the training loss might continue to decrease: further training is no longer worth it and leads to overfitting. Stopping at this point is known as early stopping.

  7. Save the model parameters and display results, such as the best training/validation loss values and the training loss curves, for convergence analysis.

    In the case of classification, we compute the accuracy (the percentage of correct classifications) or the error (the percentage of misclassifications) during training, as well as the loss. At the end of training, a confusion matrix helps evaluate the quality of the classifier.
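
    For illustration, both metrics can be computed with a few lines of NumPy (a minimal sketch; the labels below are made up and independent of any particular model):

    import numpy as np

    true_labels = np.array([0, 0, 1, 1, 2, 2])        # ground-truth classes (made-up)
    predicted_labels = np.array([0, 1, 1, 1, 2, 0])   # classifier predictions (made-up)

    # Accuracy: fraction of correctly classified examples (here 4 out of 6)
    accuracy = np.mean(true_labels == predicted_labels)

    # Confusion matrix: entry [i, j] counts examples of true class i predicted as class j
    n_classes = 3
    confusion = np.zeros((n_classes, n_classes), dtype=int)
    for true, pred in zip(true_labels, predicted_labels):
        confusion[true, pred] += 1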

    Let's see these steps in practice, starting with a Theano session in a Python shell:

    import theano
    import theano.tensor as T
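
    As a rough end-to-end sketch of how the steps above fit together, here is a minimal, illustrative program. It is not the chapter's actual example: the synthetic dataset, the simple softmax classifier, and all names and values below are assumptions made only for the sake of the sketch.

    import numpy as np

    floatX = theano.config.floatX

    # 2-3. Load and preprocess data: a synthetic stand-in for a real data loader
    rng = np.random.RandomState(0)
    n_in, n_out = 20, 3
    train_x = rng.randn(1000, n_in).astype(floatX)
    train_y = rng.randint(0, n_out, size=1000).astype('int32')
    valid_x = rng.randn(200, n_in).astype(floatX)
    valid_y = rng.randint(0, n_out, size=200).astype('int32')

    # 4. Build a model: parameters live in shared variables so that the
    #    updates computed during training persist between function calls
    W = theano.shared(np.zeros((n_in, n_out), dtype=floatX), name='W')
    b = theano.shared(np.zeros(n_out, dtype=floatX), name='b')
    x = T.matrix('x')
    y = T.ivector('y')
    p_y = T.nnet.softmax(T.dot(x, W) + b)
    loss = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])   # negative log-likelihood
    error = T.mean(T.neq(T.argmax(p_y, axis=1), y))       # misclassification rate

    # 5. Train: stochastic gradient descent on mini-batches
    lr = np.asarray(0.1, dtype=floatX)
    grads = T.grad(loss, [W, b])
    updates = [(p, p - lr * g) for p, g in zip([W, b], grads)]
    train_fn = theano.function([x, y], loss, updates=updates)
    valid_fn = theano.function([x, y], [loss, error])

    batch_size = 100
    n_train_batches = train_x.shape[0] // batch_size        # iterations per epoch

    # 6-7. Validate after each epoch, keep the best parameters, stop early
    best_valid_loss = np.inf
    patience, bad_epochs = 5, 0
    for epoch in range(100):
        for i in range(n_train_batches):                    # one iteration per batch
            train_fn(train_x[i * batch_size:(i + 1) * batch_size],
                     train_y[i * batch_size:(i + 1) * batch_size])
        valid_loss, valid_error = valid_fn(valid_x, valid_y)
        if valid_loss < best_valid_loss:
            best_valid_loss = valid_loss
            bad_epochs = 0
            np.savez('model.npz', W=W.get_value(), b=b.get_value())   # save best parameters
        else:
            bad_epochs += 1
            if bad_epochs > patience:
                break                                        # early stopping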