Search results for: logistic sigmoid function

Number of results: 1,318,358

2012
Piotr Romanowski

First, the process of building a neural network for event forecasting is presented, namely the selection of the network's architecture and parameters. Next, the effect of adding data calibrated by a nonlinear input function to input data calibrated linearly is described. The hyperbolic tangent was adopted as the nonlinear input function. Hyperbolic tangent sigmoid transfer function and log sigmoid tra...
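
The truncated abstract names the two standard sigmoid transfer functions (hyperbolic-tangent and log-sigmoid) alongside linear input calibration. As a rough NumPy sketch of those building blocks only (the paper's actual calibration procedure is cut off above), the following compares linear scaling of an input series with a tanh-based nonlinear recalibration:

```python
import numpy as np

def logsig(x):
    """Log-sigmoid transfer function, 1 / (1 + e^-x), range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    """Hyperbolic-tangent sigmoid transfer function, range (-1, 1)."""
    return np.tanh(x)

# Linear calibration: rescale raw inputs to [-1, 1].
raw = np.array([3.0, 7.5, 12.0, 20.0, 25.5])
linear = 2 * (raw - raw.min()) / (raw.max() - raw.min()) - 1

# Nonlinear calibration: pass the linearly scaled data through tanh,
# compressing extreme values toward the ends of the range.
print(tansig(linear))
# Log-sigmoid maps the same data into (0, 1) instead.
print(logsig(linear))
```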

Journal: :IEEE transactions on cybernetics 2021

In this paper, a novel particle swarm optimization (PSO) algorithm is put forward in which a sigmoid-function-based weighting strategy is developed to adaptively adjust the acceleration coefficients. The newly proposed adaptive strategy takes into account both the distance from the global best position and the distance from the particle's personal best position, thereby having the distinguishing feature of an enhanced convergence rate. Inspired by activatio...
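
The exact weighting formula is not visible in the truncated abstract; the sketch below simply assumes the acceleration coefficients grow with a sigmoid of the particle's distance to each guide, to illustrate the general idea of sigmoid-based adaptation in a PSO velocity update:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_coefficients(x, pbest, gbest, c_min=1.5, c_max=2.5):
    """Illustrative sigmoid weighting (an assumption, not the paper's
    formula): the farther a particle is from a guide, the larger the
    corresponding acceleration coefficient."""
    d_p = np.linalg.norm(x - pbest)   # distance to personal best
    d_g = np.linalg.norm(x - gbest)   # distance to global best
    c1 = c_min + (c_max - c_min) * sigmoid(d_p)
    c2 = c_min + (c_max - c_min) * sigmoid(d_g)
    return c1, c2

# One velocity update using the adapted coefficients.
rng = np.random.default_rng(0)
x, v = rng.normal(size=3), np.zeros(3)
pbest, gbest = x.copy(), rng.normal(size=3)
c1, c2 = adaptive_coefficients(x, pbest, gbest)
v = 0.7 * v + c1 * rng.random(3) * (pbest - x) + c2 * rng.random(3) * (gbest - x)
x = x + v
```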

Journal: Optics Letters, 1996
D. Pignon, P. J. Parmiter, J. K. Slack, M. A. Hands, T. J. Hall, M. van Daalen, J. Shawe-Taylor

An experiment using the phenomenon of percolation has been conducted to demonstrate the implementation of neural functionality (summing and sigmoid transfer). A simple analog approximation to digital percolation is implemented. The device consists of a piece of amorphous silicon with stochastic bit-stream optical inputs, in which a current percolating from one end to the other defines the neuro...
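
The optical device itself cannot be reproduced in software, but the sigmoidal transfer that stochastic bit-streams give "for free" can be: summing Bernoulli bit-streams and thresholding the count yields an output rate that is a binomial tail probability, an S-shaped function of the input rate. A toy numerical analogue (our construction, not the paper's percolation physics):

```python
import numpy as np

rng = np.random.default_rng(1)

def bitstream_neuron(p_inputs, threshold, n_ticks=10_000):
    """Sum stochastic bit-streams each tick and fire when the count
    reaches `threshold`; the firing rate is a sigmoidal function of
    the input probabilities (a binomial tail probability)."""
    bits = rng.random((n_ticks, len(p_inputs))) < p_inputs  # Bernoulli streams
    fired = bits.sum(axis=1) >= threshold
    return fired.mean()

# Sweep a common input rate: the output rate traces an S-shaped curve.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, bitstream_neuron(np.full(8, p), threshold=4))
```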

Journal: The Journal of Physical Chemistry B, 2012
Oleksandr Zavalov, Vera Bocharova, Vladimir Privman, Evgeny Katz

The first realization of a biomolecular OR gate function with double-sigmoid response (sigmoid in both inputs) is reported. Two chemical inputs activate the enzymatic gate processes, resulting in the output signal: chromogen oxidation, which occurs when either one of the inputs or both are present (corresponding to the OR binary function), and can be optically detected. High-quality gate functi...
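
A "double-sigmoid" OR response can be pictured as each input passing through its own sigmoid and the results combining by probabilistic (noisy) OR. The sketch below is a purely mathematical toy of that response surface, not a model of the enzymatic chemistry reported in the paper; the midpoint and steepness parameters are placeholders:

```python
import numpy as np

def sigmoid(u, u0=0.5, steepness=10.0):
    return 1.0 / (1.0 + np.exp(-steepness * (u - u0)))

def or_gate(a, b):
    """Double-sigmoid OR: each input is squashed by its own sigmoid,
    and the outputs are combined by probabilistic OR (noisy-OR)."""
    sa, sb = sigmoid(a), sigmoid(b)
    return 1.0 - (1.0 - sa) * (1.0 - sb)

# The four binary input combinations recover the OR truth table.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(or_gate(a, b), 3))
```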

2005
H. S. Kam, S. N. Cheong

This paper presents a Gaussian noise filter that uses a sigmoid-shaped membership function to model image information in the spatial domain. This function acts as a tunable smoothing intensification operator. With a proper choice of the two sigmoid parameters 't' and 'a', the filter strength can be tuned for removal of Gaussian noise in intensity images. An image information measure, Total Compatib...
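
The roles of 't' and 'a' are cut off in the abstract; the sketch below assumes 't' is an intensity-difference threshold and 'a' a slope, and uses the resulting sigmoid membership to weight a 3x3 neighborhood average, which is one common way such fuzzy smoothing filters are built:

```python
import numpy as np

def membership(diff, t=20.0, a=0.2):
    """Sigmoid-shaped membership: weight near 1 for small intensity
    differences (likely same region), falling smoothly past t."""
    return 1.0 / (1.0 + np.exp(a * (np.abs(diff) - t)))

def fuzzy_smooth(img):
    """Weighted mean over each 3x3 neighborhood; weights come from the
    sigmoid membership of every neighbor's difference to the centre."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i-1:i+2, j-1:j+2].astype(float)
            wts = membership(patch - img[i, j])
            out[i, j] = (wts * patch).sum() / wts.sum()
    return out

noisy = np.clip(100 + np.random.default_rng(2).normal(0, 10, (16, 16)), 0, 255)
print(noisy.std(), fuzzy_smooth(noisy).std())  # smoothing reduces spread
```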

Journal: Appl. Soft Comput., 2014
Christopher J. K. Knight, David J. B. Lloyd, Alexandra S. Penn

Fuzzy cognitive mapping is commonly used as a participatory modelling technique whereby stakeholders create a semi-quantitative model of a system of interest. This model is often turned into an iterative map, which should (ideally) have a unique stable fixed point. Several methods of doing this have been used in the literature, but little attention has been paid to differences in output such dif...
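
A common way to turn a fuzzy cognitive map into an iterative map is to squash the weighted state through a sigmoid at every step; whether the fixed point reached is unique and stable then depends on the weight matrix and the sigmoid steepness. A minimal sketch of that iteration (the edge weights here are made up for illustration):

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def iterate_fcm(W, a0, lam=1.0, tol=1e-9, max_iter=1000):
    """Iterate a_{k+1} = sigmoid(W @ a_k) until the state stops moving;
    small lam makes the map a contraction, guaranteeing uniqueness."""
    a = a0.copy()
    for _ in range(max_iter):
        a_next = sigmoid(W @ a, lam)
        if np.max(np.abs(a_next - a)) < tol:
            return a_next
        a = a_next
    return a

W = np.array([[ 0.0, 0.6, -0.4],
              [ 0.5, 0.0,  0.3],
              [-0.2, 0.7,  0.0]])   # stakeholder-derived edge weights
print(iterate_fcm(W, np.array([0.5, 0.5, 0.5])))
```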

Journal: CoRR, 2016
François Chollet

We present a method for training multi-label, massively multi-class image classification models that is faster and more accurate than supervision via a sigmoid cross-entropy loss (logistic regression). Our method consists of embedding high-dimensional sparse labels onto a lower-dimensional dense sphere of unit-normed vectors and treating the classification problem as a cosine proximity regres...
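
A minimal sketch of the two ingredients named in the abstract, dense unit-norm label targets and a cosine proximity loss. The random embedding matrix here is a placeholder assumption; the paper builds the embedding space rather than drawing it at random:

```python
import numpy as np

rng = np.random.default_rng(3)

n_classes, dim = 10_000, 256
# Dense unit-norm embedding for each label (random placeholder here).
E = rng.normal(size=(n_classes, dim))
E /= np.linalg.norm(E, axis=1, keepdims=True)

def target_vector(label_ids):
    """Sparse multi-label target -> dense unit vector on the sphere."""
    v = E[label_ids].sum(axis=0)
    return v / np.linalg.norm(v)

def cosine_proximity_loss(pred, target):
    """Loss = -cos(pred, target); minimized when directions align."""
    pred = pred / np.linalg.norm(pred)
    return -float(pred @ target)

t = target_vector([3, 17, 42])
print(cosine_proximity_loss(rng.normal(size=dim), t))  # near 0 for random pred
print(cosine_proximity_loss(t.copy(), t))              # -1 at perfect alignment
```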

Journal: CoRR, 2018
Elad Tolochinsky, Dan Feldman

A coreset (or core-set) in this paper is a small weighted subset Q of the input set P with respect to a given monotonic function f : ℝ → ℝ that provably approximates its fitting loss ∑_{p∈P} f(p · x) for any given x ∈ ℝ^d. Using Q we can obtain an approximation of x* that minimizes this loss by running existing optimization algorithms on Q. We provide: (i) a lower bound that proves that there are sets w...
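
To make the definition concrete: a coreset lets the weighted subset loss stand in for the full loss at every query point x. The sketch below checks that property numerically with f the logistic loss; uniform sampling with inverse-probability weights is used only as a placeholder, since the paper's actual coreset construction is not visible in the truncated abstract:

```python
import numpy as np

def f(z):
    """Monotonic fitting function; logistic loss log(1 + e^z) as example."""
    return np.log1p(np.exp(z))

def full_loss(P, x):
    return f(P @ x).sum()

def coreset_loss(Q, w, x):
    """Weighted subset loss; a good coreset keeps this close to the
    full loss simultaneously for every x."""
    return (w * f(Q @ x)).sum()

rng = np.random.default_rng(4)
P = rng.normal(size=(5000, 3))
idx = rng.choice(len(P), size=200, replace=False)  # placeholder: uniform
Q, w = P[idx], np.full(200, len(P) / 200)          # sampling, not the paper's

for _ in range(3):                                 # compare at random queries
    x = rng.normal(size=3)
    print(full_loss(P, x), coreset_loss(Q, w, x))
```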

Journal: CoRR, 2018
Haoyu Fu, Yuejie Chi, Yingbin Liang

We study the local geometry of a one-hidden-layer fully-connected neural network where the training samples are generated from a multi-neuron logistic regression model. We prove that under Gaussian input, the empirical risk function employing quadratic loss exhibits strong convexity and smoothness uniformly in a local neighborhood of the ground truth, for a class of smooth activation functions ...
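
As a numerical illustration of the setup (with assumed details: labels drawn from an averaged multi-neuron sigmoid model, which may differ from the paper's exact normalization), the quadratic-loss empirical risk of a one-hidden-layer sigmoid network under Gaussian inputs can be written directly:

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, K, n = 5, 3, 2000
W_star = rng.normal(size=(K, d))          # ground-truth neuron weights
X = rng.normal(size=(n, d))               # Gaussian inputs
p = sigmoid(X @ W_star.T).mean(axis=1)    # multi-neuron logistic model
y = (rng.random(n) < p).astype(float)     # Bernoulli labels

def empirical_risk(W):
    """Quadratic-loss empirical risk of the one-hidden-layer network."""
    pred = sigmoid(X @ W.T).mean(axis=1)
    return np.mean((y - pred) ** 2)

print(empirical_risk(W_star))             # near the label-noise floor
print(empirical_risk(W_star + 0.5 * rng.normal(size=(K, d))))  # typically higher
```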

1995
David P. Helmbold, Jyrki Kivinen, Manfred K. Warmuth

We analyze and compare the well-known Gradient Descent algorithm and a new algorithm, called the Exponentiated Gradient algorithm, for training a single neuron with an arbitrary transfer function. Both algorithms are easily generalized to larger neural networks, and the generalization of Gradient Descent is the standard back-propagation algorithm. In this paper we prove worst-case loss bounds f...
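
The two update rules differ in one line: Gradient Descent subtracts the scaled gradient, while Exponentiated Gradient multiplies each weight by an exponential of its gradient component and renormalizes, keeping the weights a probability vector. A minimal sketch for a single sigmoid neuron with squared loss (the target weights and learning rate below are arbitrary illustration values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, x, y):
    """Gradient of squared loss (sigmoid(w.x) - y)^2 for one example."""
    p = sigmoid(w @ x)
    return 2 * (p - y) * p * (1 - p) * x

def gd_step(w, x, y, eta=0.5):
    """Gradient Descent: additive update."""
    return w - eta * grad(w, x, y)

def eg_step(w, x, y, eta=0.5):
    """Exponentiated Gradient: multiplicative update, then renormalize
    so the weights stay on the probability simplex."""
    v = w * np.exp(-eta * grad(w, x, y))
    return v / v.sum()

rng = np.random.default_rng(6)
w_true = np.array([0.7, 0.1, 0.1, 0.1])   # target weights (sum to 1)
w_gd = np.full(4, 0.25)
w_eg = np.full(4, 0.25)
for _ in range(100):
    x = rng.normal(size=4)
    y = sigmoid(x @ w_true)
    w_gd, w_eg = gd_step(w_gd, x, y), eg_step(w_eg, x, y)
print(w_gd, w_eg)
```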

Chart of the number of search results per year

Click on the chart to filter the results by publication year