A methodology for tuning hyperparameters without overfitting.

Divide the dataset into three partitions: training, validation, and test. Learn model parameters on the training set and evaluate hyperparameter choices on the validation set. Use the test set only once, at the very end, since it stands in for unseen data.
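A minimal sketch of such a three-way split, using only the standard library; the 60/20/20 fractions and the fixed seed are illustrative choices, not prescribed by the text:

```python
import random

def three_way_split(data, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle the data and split it into train/validation/test partitions.

    val_frac/test_frac and the seed are hypothetical defaults for the sketch.
    """
    items = list(data)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = three_way_split(range(100))
print(len(train), len(val), len(test))  # 60 20 20
```

The shuffle matters: without it, an ordered dataset (e.g. sorted by class) would put different distributions into each partition.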

Exhaustive cross-validation

Leave-p-out cross-validation

Leave-one-out cross-validation
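Both exhaustive schemes can be sketched with one generator: leave-p-out enumerates every size-p test subset (C(n, p) splits, so it is only feasible for small n and p), and leave-one-out is simply the special case p = 1. The function name is my own for illustration:

```python
from itertools import combinations

def leave_p_out(data, p):
    """Yield (train, test) pairs for every possible size-p test subset."""
    items = list(data)
    indices = range(len(items))
    for test_idx in combinations(indices, p):
        held_out = set(test_idx)
        test = [items[i] for i in test_idx]
        train = [items[i] for i in indices if i not in held_out]
        yield train, test

# Leave-one-out is leave-p-out with p = 1: n splits for n examples.
splits = list(leave_p_out([10, 20, 30, 40], p=1))
print(len(splits))  # 4
```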

Non-exhaustive cross-validation

k-fold cross-validation
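k-fold cross-validation partitions the data into k folds and lets each fold serve once as the validation set while the remaining k-1 folds form the training set. A minimal standard-library sketch (striding is one simple way to form folds; shuffling first is a common refinement omitted here):

```python
def k_fold(data, k):
    """Yield k (train, validation) pairs; each fold validates exactly once."""
    items = list(data)
    folds = [items[i::k] for i in range(k)]  # fold i takes every k-th item
    for i in range(k):
        val = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, val

splits = list(k_fold(range(10), k=5))
print(len(splits))  # 5
```

Compared with the exhaustive schemes above, this costs only k training runs instead of C(n, p), which is why k-fold (typically k = 5 or 10) is the usual default.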

< An image from a Stanford course, Neural Networks for Visual Recognition >