Search results for: gradient descent algorithm
Number of results: 869,527 (filter results by year)
Gradient descent bit flipping (GDBF) and its many variants have offered remarkable improvements over legacy, or modified, decoding techniques in the case of low-density parity-check (LDPC) codes. GDBF variants, such as noisy GDBF (NGDBF), have been extensively studied, and their performance has been assessed over multiple channels: the binary symmetric channel (BSC), the binary erasure channel (BEC), and the additive white Gaussian...
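The core GDBF idea can be sketched in a few lines: each iteration computes an inversion function per bit (channel correlation plus the sum of adjacent check values) and flips the bit with the smallest value. The code below is a minimal single-flip sketch of our own, not the paper's implementation; the toy parity-check matrix and bipolar signaling convention are assumptions.

```python
import numpy as np

def gdbf_decode(H, y, max_iters=50):
    """Single-flip GDBF sketch.

    H: binary parity-check matrix (m x n).
    y: received bipolar (+1/-1) word, e.g. hard decisions from a BSC.
    """
    x = y.copy()
    for _ in range(max_iters):
        # bipolar syndrome: +1 if a check is satisfied, -1 if violated
        s = 1 - 2 * (H @ ((1 - x) // 2) % 2)
        if np.all(s == 1):
            break  # all checks satisfied: valid codeword
        # inversion function: channel term plus adjacent check values
        inv = x * y + H.T @ s
        # flip the single least-reliable bit
        k = np.argmin(inv)
        x[k] = -x[k]
    return x
```

NGDBF-style variants would add a noise term to `inv` and typically flip multiple bits per iteration against a threshold; the single-flip form above is just the simplest member of the family.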
Nowadays, swarm-intelligence-based algorithms are widely used to optimize the dynamic traveling salesman problem (DTSP). In this paper, we use a mixed method of Ant Colony Optimization (ACO) and gradient descent to optimize the DTSP, which differs from the standard ACO algorithm in its evaporation rate and innovative data. This approach prevents premature convergence, escapes from local optima, and a...
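For reference, the ACO baseline that such hybrids modify looks roughly as follows: ants build tours from pheromone and distance heuristics, then pheromone evaporates and is reinforced along the best tour. This is a generic static-TSP sketch with a fixed evaporation rate `rho`; the paper's gradient-descent-tuned evaporation and dynamic-instance handling are not reproduced here, and all names and defaults are ours.

```python
import numpy as np

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, rng=None):
    """Minimal ACO sketch for a static TSP instance (distance matrix `dist`)."""
    rng = rng or np.random.default_rng(0)
    n = len(dist)
    tau = np.ones((n, n))            # pheromone levels
    eta = 1.0 / (dist + np.eye(n))   # heuristic visibility (eye avoids /0)
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False   # exclude already-visited cities
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then reinforcement along the best tour so far
        tau *= (1 - rho)
        for k in range(n):
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i, j] += 1.0 / best_len
    return best_tour, best_len
```

A gradient-descent hybrid in the spirit of the abstract would treat `rho` (or the pheromone update itself) as a parameter to adapt between iterations rather than a constant.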
Momentum-based stochastic gradient methods such as the heavy ball (HB) method and Nesterov's accelerated gradient (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). In general, "fast gradient" methods have provable improvements over gradient descent only for...
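The two update rules differ only in where the gradient is evaluated: HB uses the current iterate, while NAG looks ahead along the momentum direction first. A minimal sketch for a generic gradient oracle (function name and hyperparameter defaults are ours, not from the paper):

```python
import numpy as np

def sgd_hb_nag(grad, x0, lr=0.1, mu=0.9, steps=100, method="hb"):
    """Heavy-ball vs. Nesterov momentum on a gradient oracle `grad`."""
    x = np.asarray(x0, float)
    v = np.zeros_like(x)
    for _ in range(steps):
        if method == "hb":
            v = mu * v - lr * grad(x)           # gradient at the current point
        else:  # "nag"
            v = mu * v - lr * grad(x + mu * v)  # gradient at the look-ahead point
        x = x + v
    return x
```

On stochastic problems, `grad` would return a minibatch gradient; the update rules are unchanged, which is why these methods drop into SGD pipelines directly.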
We propose proximal backpropagation (ProxProp) as a novel algorithm that takes implicit, rather than explicit, gradient steps to update the network parameters during neural network training. Our algorithm is motivated by the step-size limitation of explicit gradient descent, which poses an impediment to optimization. ProxProp is developed from a general point of view on the backpropagation algori...
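The step-size limitation the abstract refers to is easiest to see on a 1-D quadratic: an explicit step diverges once the step size exceeds 2 over the curvature, while an implicit (proximal) step contracts for any positive step size. This toy comparison is our own illustration of that motivation, not the ProxProp algorithm itself.

```python
def explicit_step(w, a, tau):
    """Explicit gradient step on f(w) = 0.5*a*w**2; unstable when tau > 2/a."""
    return w - tau * a * w

def implicit_step(w, a, tau):
    """Implicit (proximal) step:
    w' = argmin_z 0.5*a*z**2 + (1/(2*tau)) * (z - w)**2,
    which solves in closed form to w / (1 + tau*a) -- stable for any tau > 0."""
    return w / (1 + tau * a)
```

ProxProp applies this kind of implicit update per layer inside the backpropagation pass; the closed-form quadratic case above only shows why implicit steps tolerate larger step sizes.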