Quick extreme learning machine for large-scale classification

The extreme learning machine (ELM) is a method for training single-layer feed-forward neural networks that became popular because it trains with a fast closed-form expression which minimizes the training error while generalizing well to new data. The ELM requires tuning the hidden layer size and computing the pseudo-inverse of the hidden layer activation matrix over the whole training set. On large-scale classification problems, the computational overhead of tuning becomes unaffordable, and the activation matrix is so large that its pseudo-inversion is very slow and the matrix may not even fit in memory. The quick extreme learning machine (QELM), proposed in this paper, can manage large classification datasets because it: (1) avoids tuning by using a bounded estimation of the hidden layer size derived from the data population; and (2) replaces the training patterns in the activation matrix with a reduced set of prototypes, avoiding the storage and pseudo-inversion of large matrices. While the ELM, and even the linear SVM, cannot be applied to large datasets, QELM runs on datasets with up to 31 million patterns, 30,000 inputs, and 131 classes in reasonable time (less than 1 h) on general-purpose computers, without special software or hardware requirements, while achieving performance similar to the ELM.

Keywords: Extreme learning machine, Classification, Large-scale datasets, Model selection
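
For readers unfamiliar with the closed-form training the abstract refers to, the following is a minimal sketch of standard ELM training and of the prototype-reduction idea behind QELM. The per-class k-means prototype computation, the function names, and all parameter values are illustrative assumptions; the abstract does not specify how QELM builds its prototypes or estimates the hidden layer size.

```python
import numpy as np
from scipy.linalg import pinv
from sklearn.cluster import KMeans

def elm_train(X, T, n_hidden, seed=0):
    """Standard ELM: random hidden weights, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # N x n_hidden activation matrix
    beta = pinv(H) @ T                               # Moore-Penrose pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def qelm_train(X, y, n_classes, n_hidden, prototypes_per_class=50, seed=0):
    """QELM-style sketch: train the ELM on class prototypes instead of the
    full training set, so the activation matrix stays small.
    NOTE: per-class k-means is an assumption, not necessarily the paper's method."""
    P, labels = [], []
    for c in range(n_classes):
        Xc = X[y == c]
        k = min(prototypes_per_class, len(Xc))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xc)
        P.append(km.cluster_centers_)
        labels += [c] * k
    P = np.vstack(P)
    T = np.eye(n_classes)[labels]                    # one-hot prototype targets
    return elm_train(P, T, n_hidden, seed)
```

Predicted classes are the argmax of `elm_predict` over the one-hot outputs. The actual QELM additionally bounds `n_hidden` from the data population rather than taking it as a free parameter.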