Frugal Optimization for Cost-related Hyperparameters

Authors

Qingyun Wu, Chi Wang, Silu Huang

Abstract

The increasing demand for democratizing machine learning algorithms calls for hyperparameter optimization (HPO) solutions at low cost. Many machine learning algorithms have hyperparameters which can cause a large variation in the training cost. But this effect is largely ignored by existing HPO methods, which are incapable of properly controlling cost during the optimization process. To address this problem, we develop a new cost-frugal HPO solution. The core of our solution is a simple but new randomized direct-search method, for which we provide theoretical guarantees on the convergence rate and the total cost incurred to achieve convergence. We provide strong empirical results in comparison with state-of-the-art HPO methods on AutoML benchmarks.
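
The core method named in the abstract is a randomized direct-search procedure. As an illustration of that general family only, and not the authors' exact algorithm, here is a minimal sketch of randomized direct search with a shrinking step size; the `loss_fn`, starting point, and schedule constants are hypothetical placeholders.

```python
import numpy as np

def randomized_direct_search(loss_fn, x0, step=1.0, shrink=0.5,
                             min_step=1e-3, budget=200, seed=0):
    """Minimize loss_fn by probing random directions around the incumbent.

    At each iteration, sample a random unit direction u and try x + step*u
    and x - step*u; move to the first probe that improves the loss,
    otherwise shrink the step size. Stops when the evaluation budget is
    spent or the step size becomes negligible.
    """
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, dtype=float), loss_fn(x0)
    evals = 1
    while evals < budget and step > min_step:
        u = rng.normal(size=x.shape)
        u /= np.linalg.norm(u)              # random unit direction
        improved = False
        for cand in (x + step * u, x - step * u):
            fc = loss_fn(cand)
            evals += 1
            if fc < fx:                     # accept the first improvement
                x, fx, improved = cand, fc, True
                break
        if not improved:
            step *= shrink                  # no progress: tighten the search
    return x, fx

# Toy usage on a quadratic surrogate of a tuning objective.
x_best, f_best = randomized_direct_search(lambda x: np.sum((x - 3.0) ** 2),
                                          x0=np.zeros(2))
print(x_best, f_best)
```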

Similar articles

Gradient-Based Optimization of Hyperparameters

Many machine learning algorithms can be formulated as the minimization of a training criterion that involves a hyperparameter. This hyperparameter is usually chosen by trial and error with a model selection criterion. In this article we present a methodology to optimize several hyperparameters, based on the computation of the gradient of a model selection criterion with respect to the hyperpara...
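
To make the idea concrete, a toy sketch of gradient-based hyperparameter optimization follows; it is not the article's exact methodology. It tunes a ridge penalty `lam` by gradient descent on the validation MSE, using the closed-form ridge solution and a finite-difference gradient as simplifying assumptions.

```python
import numpy as np

def val_loss(lam, Xtr, ytr, Xva, yva):
    """Validation MSE of ridge regression as a function of hyperparameter lam."""
    d = Xtr.shape[1]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)
    return np.mean((Xva @ w - yva) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5)); w_true = rng.normal(size=5)
y = X @ w_true + 0.3 * rng.normal(size=120)
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

lam, lr, eps = 1.0, 0.5, 1e-4
for _ in range(50):
    # Finite-difference gradient of the model selection criterion w.r.t. lam.
    g = (val_loss(lam + eps, Xtr, ytr, Xva, yva)
         - val_loss(lam - eps, Xtr, ytr, Xva, yva)) / (2 * eps)
    lam = max(lam - lr * g, 1e-6)           # keep the penalty positive
print("tuned lambda:", lam)
```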

Optimization of Gaussian process hyperparameters using Rprop

Gaussian processes are a powerful tool for non-parametric regression. Training can be realized by maximizing the likelihood of the data given the model. We show that Rprop, a fast and accurate gradient-based optimization technique originally designed for neural network learning, can outperform more elaborate unconstrained optimization methods on real world data sets, where it is able to converg...
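
A compact sketch of this setup follows, assuming an RBF kernel and a simple Rprop-style sign-based update; the finite-difference gradients are a shortcut for illustration, whereas the actual approach differentiates the likelihood analytically.

```python
import numpy as np

def gp_nll(theta, X, y):
    """Negative log marginal likelihood of a GP with an RBF kernel.

    theta = log([signal_var, length_scale, noise_var]); the log
    parameterization keeps all hyperparameters positive.
    """
    sv, ls, nv = np.exp(theta)
    d2 = (X[:, None] - X[None, :]) ** 2
    K = sv * np.exp(-0.5 * d2 / ls**2) + (nv + 1e-8) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def rprop(f, theta, iters=60, d0=0.1, dmin=1e-6, dmax=1.0):
    """Rprop: adapt per-parameter step sizes from gradient sign changes only."""
    delta = np.full_like(theta, d0)
    prev_g = np.zeros_like(theta)
    eps = 1e-5
    for _ in range(iters):
        # Finite-difference gradient (an illustration-only simplification).
        g = np.array([(f(theta + eps * e) - f(theta - eps * e)) / (2 * eps)
                      for e in np.eye(len(theta))])
        same = np.sign(g) * np.sign(prev_g)
        delta = np.where(same > 0, np.minimum(delta * 1.2, dmax), delta)
        delta = np.where(same < 0, np.maximum(delta * 0.5, dmin), delta)
        theta = theta - np.sign(g) * delta   # step depends on sign, not magnitude
        prev_g = g
    return theta

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(-3, 3, 40)); y = np.sin(X) + 0.1 * rng.normal(size=40)
theta = rprop(lambda t: gp_nll(t, X, y), np.zeros(3))
print("tuned hyperparameters:", np.exp(theta))
```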

Towards an Empirical Foundation for Assessing Bayesian Optimization of Hyperparameters

Progress in practical Bayesian optimization is hampered by the fact that the only available standard benchmarks are artificial test functions that are not representative of practical applications. To alleviate this problem, we introduce a library of benchmarks from the prominent application of hyperparameter optimization and use it to compare Spearmint, TPE, and SMAC, three recent Bayesian opti...

Hot Swapping for Online Adaptation of Optimization Hyperparameters

We describe a general framework for online adaptation of optimization hyperparameters by ‘hot swapping’ their values during learning. We investigate this approach in the context of adaptive learning rate selection using an explore-exploit strategy from the multi-armed bandit literature. Experiments on a benchmark neural network show that the hot swapping approach leads to consistently better so...
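
As a hedged illustration of the explore-exploit idea, the sketch below hot swaps the learning rate of a toy SGD loop using an epsilon-greedy bandit, with the loss improvement per epoch as the reward; the arm set, reward definition, and toy objective are assumptions, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(2)
lrs = [0.001, 0.01, 0.1, 0.5]             # bandit arms: candidate learning rates
counts = np.zeros(len(lrs)); values = np.zeros(len(lrs))
eps = 0.2

w = rng.normal(size=10)                   # toy model: minimize ||w||^2
loss = lambda w: np.sum(w ** 2)

for epoch in range(50):
    # Explore-exploit choice of which learning rate to hot swap in.
    arm = rng.integers(len(lrs)) if rng.random() < eps else int(np.argmax(values))
    before = loss(w)
    for _ in range(20):                   # one "epoch" of noisy SGD with the chosen lr
        g = 2 * w + 0.01 * rng.normal(size=w.shape)
        w = w - lrs[arm] * g
    reward = before - loss(w)             # reward: improvement in the loss
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean per arm

print("preferred learning rate:", lrs[int(np.argmax(values))])
```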

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a ...

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i12.17239