Search results for: learning based optimization
Number of results: 3,499,596
To solve the problem of image color recognition, this paper proposes a recognition and optimization method based on deep learning and designs a post-processing framework around the bag-of-words (BoW) model. The method extracts CNN features and calculates feature similarity; image sets with high similarity are fed into a classifier trained by BoW clustering to produce preliminary retrieval results. The result categories with the largest number of imag...
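The snippet below is a minimal sketch of the similarity step described in this result, assuming the CNN features have already been extracted; the function names, the cosine-similarity choice, and the top-k cutoff are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: CNN feature vectors (assumed precomputed by any off-the-shelf
# CNN) are compared by cosine similarity, and the most similar images are kept
# as preliminary retrieval candidates. Names and thresholds are illustrative.
import numpy as np

def cosine_similarity(query_feat: np.ndarray, gallery_feats: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query feature and a gallery of features."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return g @ q

def preliminary_retrieval(query_feat, gallery_feats, top_k=10):
    """Return indices of the top_k most similar gallery images."""
    sims = cosine_similarity(query_feat, gallery_feats)
    return np.argsort(-sims)[:top_k]

# Toy usage with random "CNN features" of dimension 512.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 512))
query = rng.normal(size=512)
print(preliminary_retrieval(query, gallery, top_k=5))
```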
Self-organizing map (SOM) neural networks have been widely applied in the information sciences. In particular, Su and Zhao proposed in 2009 an SOM-based optimization (SOMO) algorithm that finds, through a competitive learning process, a winning neuron that stands for the minimum of an objective function. In this paper, we generalize the SOMO algorithm to a so-called SOMO-m alg...
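As a rough illustration of the SOMO idea, the sketch below treats a population of neurons as candidate solutions, picks the neuron with the lowest objective value as the winner of the competitive step, and pulls the remaining neurons toward it; the update rule, decay schedule, and noise term are simplifications and not the exact algorithm of Su and Zhao (2009).

```python
# Illustrative SOM-style optimizer in the spirit of SOMO: the neuron with the
# lowest objective value wins, and the others move toward the winner with a
# decaying learning rate plus exploration noise.
import numpy as np

def somo_like_minimize(f, dim, n_neurons=30, iters=200, lr=0.5, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    neurons = rng.uniform(-5.0, 5.0, size=(n_neurons, dim))
    for t in range(iters):
        values = np.array([f(w) for w in neurons])
        winner = neurons[np.argmin(values)].copy()   # competitive step
        step = lr * (1.0 - t / iters)                # decaying learning rate
        # Cooperative step: pull every neuron toward the winner, plus noise.
        neurons += step * (winner - neurons)
        neurons += noise * (1.0 - t / iters) * rng.normal(size=neurons.shape)
    values = np.array([f(w) for w in neurons])
    return neurons[np.argmin(values)], values.min()

# Example: minimize the sphere function in 2 dimensions.
x_best, f_best = somo_like_minimize(lambda x: float(np.sum(x**2)), dim=2)
print(x_best, f_best)
```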
An intelligent decision guidance system composed of data collection, learning, optimization, and prediction is proposed in this paper. Built on a traditional relational database management system, it incorporates regression learning capability. The Expectation Maximization Multi-Step Piecewise Surface Regression Learning (EMMPSR) algorithm is proposed to solve piecewise surface regre...
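The sketch below illustrates the general EM-style alternation behind piecewise surface regression, not the EMMPSR algorithm itself: points are assigned to the piece whose current linear model fits them best, then each piece is refit by least squares; the number of pieces and the initialization are arbitrary assumptions.

```python
# EM-style alternation for piecewise linear regression: (E-step) assign each
# sample to the best-fitting piece, (M-step) refit each piece by least squares.
import numpy as np

def piecewise_linear_regression(X, y, n_pieces=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])          # add bias column
    labels = rng.integers(0, n_pieces, size=n)    # random initial assignment
    coefs = np.zeros((n_pieces, d + 1))
    for _ in range(iters):
        # M-step: least-squares fit per piece.
        for k in range(n_pieces):
            mask = labels == k
            if mask.sum() > d:                    # need enough points to fit
                coefs[k], *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        # E-step: reassign each point to its best-fitting piece.
        residuals = np.abs(Xb @ coefs.T - y[:, None])
        labels = residuals.argmin(axis=1)
    return coefs, labels

# Toy data: two linear regimes of a 1-D "surface".
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.where(X[:, 0] < 0, 2 * X[:, 0] + 1, -3 * X[:, 0] + 1)
coefs, labels = piecewise_linear_regression(X, y)
print(coefs)
```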
This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machi...
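As a small illustration of the kind of problem such a review covers, the sketch below runs stochastic gradient descent on logistic regression, a standard stand-in for large-scale text classification; the data and hyperparameters are synthetic placeholders unrelated to the paper's case studies.

```python
# Stochastic gradient descent (SGD) on logistic regression with synthetic data.
import numpy as np

def sgd_logistic_regression(X, y, lr=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):          # one sample at a time
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))    # predicted probability
            w -= lr * (p - y[i]) * X[i]            # gradient of the log loss
    return w

# Synthetic binary classification data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)
w = sgd_logistic_regression(X, y)
print("train accuracy:", ((X @ w > 0) == y.astype(bool)).mean())
```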
In recent years, attention has focused on the relationship between black-box optimization and reinforcement learning. Black-box optimization is a framework for finding the input that optimizes the output of an unknown function. Reinforcement learning, by contrast, is a framework for finding a policy that optimizes the expected cumulative reward through trial and error....
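A minimal sketch of the black-box setting described here is given below: the optimizer may only query f(x) and never sees gradients, so a simple (1+1)-style random search keeps the best input found so far; the objective, step size, and iteration budget are placeholder assumptions.

```python
# Black-box optimization by (1+1) random search: query-only access to f.
import numpy as np

def random_search(f, dim, iters=1000, step=0.3, seed=0):
    rng = np.random.default_rng(seed)
    x_best = rng.uniform(-5.0, 5.0, size=dim)
    f_best = f(x_best)
    for _ in range(iters):
        candidate = x_best + step * rng.normal(size=dim)  # perturb best input
        f_cand = f(candidate)
        if f_cand < f_best:                               # keep improvements only
            x_best, f_best = candidate, f_cand
    return x_best, f_best

# Example: minimize an "unknown" function through queries only.
x, fx = random_search(lambda v: float(np.sum((v - 2.0) ** 2)), dim=3)
print(x, fx)
```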
Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribière method are also significantly more efficient than the backpropagation algorithm. These results are based on experiments performed on small Boolean learning problems and the noisy real...
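For reference, the sketch below implements a Polak-Ribière nonlinear conjugate gradient step with a backtracking (Armijo) line search on a standard test function; it is a generic textbook-style version, not the parallel setup or the experimental configuration used in the paper.

```python
# Polak-Ribiere (PR+) nonlinear conjugate gradient with Armijo backtracking.
import numpy as np

def polak_ribiere_cg(f, grad, x0, iters=200, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                      # safeguard: restart with steepest descent
            d = -g
        # Backtracking line search satisfying the Armijo sufficient-decrease condition.
        alpha, fx = 1.0, f(x)
        for _ in range(50):
            if f(x + alpha * d) <= fx + 1e-4 * alpha * (g @ d):
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, (g_new @ (g_new - g)) / (g @ g))   # PR+ update
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function.
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2*(1 - v[0]) - 400*v[0]*(v[1] - v[0]**2),
                           200*(v[1] - v[0]**2)])
print(polak_ribiere_cg(f, grad, x0=[-1.2, 1.0]))
```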
Optimization is one of the most important issues in all fields of science and engineering. Optimization problems fall into two main categories: continuous optimization and discrete optimization. Traditional methods, such as gradient descent, are used to solve continuous optimization problems, while for discrete optimization both traditional and many newer algorithms have been introduced. Due to the long tim...
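For the continuous case mentioned above, the sketch below shows plain gradient descent on a simple differentiable objective; the step size, iteration count, and objective are arbitrary placeholders.

```python
# Plain gradient descent on a differentiable objective.
import numpy as np

def gradient_descent(grad, x0, lr=0.1, iters=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x -= lr * grad(x)        # step against the gradient
    return x

# Minimize f(x) = ||x - 3||^2, whose gradient is 2 * (x - 3).
print(gradient_descent(lambda x: 2 * (x - 3.0), x0=[0.0, 0.0]))
```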
One of the major distinguishing features of dynamic multiobjective optimization problems (DMOPs) is that the optimization objectives change over time, so tracking the varying Pareto-optimal front becomes a challenge. One promising solution is to reuse past “experiences” to construct a prediction model via statistical machine learning approaches. However, most of the existing methods ...
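The sketch below illustrates the generic prediction strategy referred to here with a simple first-order (linear motion) model: an initial population for the new environment is extrapolated from the Pareto-optimal solutions of the two previous environments plus noise; it assumes solutions are matched one-to-one across time steps and is not the specific learning model proposed in the paper.

```python
# First-order prediction of an initial population for the next environment in
# a dynamic multiobjective problem, from two past Pareto sets.
import numpy as np

def predict_initial_population(pareto_prev, pareto_curr, noise_scale=0.05, seed=0):
    """Predict decision vectors for the next time step from two past Pareto sets.

    Assumes the rows of pareto_prev and pareto_curr are matched one-to-one.
    """
    rng = np.random.default_rng(seed)
    step = pareto_curr - pareto_prev                  # estimated movement per solution
    predicted = pareto_curr + step                    # first-order extrapolation
    predicted += rng.normal(scale=noise_scale, size=predicted.shape)
    return predicted

# Toy example: a Pareto set drifting along the first decision variable.
pareto_t0 = np.random.default_rng(1).uniform(size=(20, 3))
pareto_t1 = pareto_t0 + np.array([0.1, 0.0, 0.0])
print(predict_initial_population(pareto_t0, pareto_t1)[:3])
```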