A simple and fast method, named ideal kernel tuning, is proposed to select the spread of the radial basis function kernel for support vector machine classification. The spread is selected directly from the data, without any training or testing, so as to bring the kernel matrix closer to the ideal kernel matrix, which has value one for pairs of patterns of the same class and zero otherwise. To avoid scaling with the training set size, the kernel matrix is computed on a small set of class prototypes. The selected spread can also be used for multi-class classification problems by considering the two most populated classes. Compared to five other popular tuning algorithms, the proposed approach is an efficient strategy whose performance is very close to the state of the art; it is 2-4 orders of magnitude faster, requires very little memory, scales better with the dataset size in both time and memory, and can classify medium-sized datasets of up to 70,000 training patterns, where other methods fail.
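The core idea can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact procedure: the function name, the candidate-spread grid, and the use of the Frobenius norm as the distance to the ideal kernel matrix are assumptions made here for concreteness.

```python
import numpy as np

def ideal_kernel_spread(prototypes, labels, sigmas):
    """Pick the RBF spread whose kernel matrix on the class prototypes
    is closest (Frobenius norm) to the ideal kernel matrix, which is
    1 for same-class pairs and 0 otherwise.  Illustrative sketch only."""
    X = np.asarray(prototypes, dtype=float)
    y = np.asarray(labels)
    # Ideal kernel matrix: 1 for patterns of the same class, 0 otherwise.
    T = (y[:, None] == y[None, :]).astype(float)
    # Squared pairwise distances between prototypes.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    best_sigma, best_err = None, np.inf
    for s in sigmas:  # hypothetical candidate grid; the paper selects directly from data
        K = np.exp(-sq / (2.0 * s ** 2))
        err = np.linalg.norm(K - T)  # Frobenius distance to the ideal kernel
        if err < best_err:
            best_sigma, best_err = s, err
    return best_sigma

# Two well-separated classes: an intermediate spread matches the ideal kernel best.
sigma = ideal_kernel_spread([[0.0], [0.1], [5.0], [5.1]],
                            [0, 0, 1, 1],
                            sigmas=[0.01, 0.5, 100.0])
```

Because only a handful of prototypes enter the kernel matrix, the cost is independent of the training set size, which is what makes the approach fast and memory-light.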
Keywords: support vector machine (SVM), classification, model selection, RBF kernel, spread tuning