Optimizing AI-based HEP algorithms using HPC and Quantum Computing

Hyperparameter Optimization (HPO) systematically explores the search space of hyperparameter configurations of Deep Learning (DL) models. Current state-of-the-art HPO algorithms, such as Hyperband, ASHA, and BOHB, rely on early termination: poorly performing trials are stopped automatically, reallocating computing resources to more promising ones. Support vector regression (SVR) models can predict the final loss of neural network architectures from their partial learning curves. Via CoE RAISE, we accessed the Quantum Annealer at the Jülich Supercomputing Centre to train quantum SVR (Q-SVR) models on MLPF learning curves. Baker et al. proposed Fast-Hyperband, a sequential modification of Hyperband that adds a decision point at every epoch within each Hyperband round and uses performance prediction at these extra decision points. We propose Swift-Hyperband, a new way to integrate performance prediction with Hyperband. Our approach requires training far fewer performance predictors than Fast-Hyperband and is easily parallelizable: multiple trainings within a round can be carried out simultaneously on different nodes. As a result, Swift-Hyperband can use Q-SVRs and benefit from HPC environments.
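The following is a minimal sketch of the SVR-based performance-prediction idea described above: an SVR is fit on partial learning curves of completed trials to predict the loss they reached at the end of training, and is then applied to a running trial. The data, function names, and SVR hyperparameters are illustrative assumptions, not values from this work.

```python
# Sketch: predict a trial's final loss from its partial learning curve
# with an SVR (scikit-learn). All names and numbers here are illustrative.
import numpy as np
from sklearn.svm import SVR

def fit_loss_predictor(partial_curves: np.ndarray, final_losses: np.ndarray) -> SVR:
    """partial_curves: (n_trials, n_seen_epochs) losses observed so far;
    final_losses: (n_trials,) losses the same trials reached at the end."""
    predictor = SVR(kernel="rbf", C=1.0, epsilon=0.01)
    predictor.fit(partial_curves, final_losses)
    return predictor

# Toy data: curves from 32 completed trials, each observed for 5 epochs.
rng = np.random.default_rng(0)
completed = rng.uniform(0.2, 1.0, size=(32, 5))
finals = completed[:, -1] * rng.uniform(0.7, 0.9, size=32)
predictor = fit_loss_predictor(completed, finals)

# Score a still-running trial from its first 5 observed epochs.
running_curve = rng.uniform(0.2, 1.0, size=(1, 5))
print("predicted final loss:", predictor.predict(running_curve)[0])
```

A Q-SVR trained on a quantum annealer could replace the classical SVR here, since only the fitted predictor's ranking of trials is consumed downstream.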
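To illustrate why Swift-Hyperband parallelizes naturally, the sketch below shows one round under stated assumptions: all surviving trials train independently for the round's epoch budget (so they can run on different nodes), and a single performance predictor ranks them once per round, rather than once per epoch as in Fast-Hyperband. The helpers `train_for`, `predictor`, and `keep_fraction` are hypothetical placeholders, not the paper's actual interfaces.

```python
# Sketch of one Swift-Hyperband-style round; assumptions as stated above.
from concurrent.futures import ProcessPoolExecutor  # stand-in for per-node HPC workers

def run_round(configs, epochs, train_for, predictor, keep_fraction=1 / 3):
    # Train every surviving config for this round's budget in parallel;
    # each call returns that trial's partial learning curve.
    with ProcessPoolExecutor() as pool:
        curves = list(pool.map(train_for, configs, [epochs] * len(configs)))
    # One predictor evaluation per trial per round (not per epoch), which
    # is why far fewer predictor trainings are needed than in Fast-Hyperband.
    predicted = predictor.predict(curves)
    ranked = sorted(zip(predicted, configs), key=lambda pair: pair[0])
    n_keep = max(1, int(len(configs) * keep_fraction))
    # Keep only the configs with the lowest predicted final loss.
    return [cfg for _, cfg in ranked[:n_keep]]
```

Because the decision point sits at the round boundary, the per-trial trainings inside the round never need to synchronize, which is what makes the scheme a good fit for HPC environments.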

keywords: High Energy Physics, SVR, Quantum SVR, Hyperparameter Optimization