Search results for: regression modelling bayesian regularization neural network

Number of results: 1338314

Journal: Iranian Journal of Soil and Water Research (تحقیقات آب و خاک ایران)
Iman Javadzarin, M.Sc., Department of Soil Science Engineering, College of Agriculture and Natural Resources, University of Tehran; Babak Motesharezadeh, Associate Professor, Department of Soil Science Engineering, College of Agriculture and Natural Resources, University of Tehran

The aim of this study was to compare the performance of multiple regression and neural network models in predicting the activity of the antioxidant enzymes superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidase (APX) and peroxidase (POX) in the shoots of wheat (Triticum aestivum), Alvand cultivar, grown in a soil polluted with cadmium. The treatments consisted of four levels of cadmium (...

1998
C. K. I. Williams

The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression and show how, by a change of viewpoint, one can see this method as a Gaussian process predictor based on priors over functions rather than on priors over parameters. This leads in to a more general discussion of Gaussian processes in section 4. Section 5 deals with further ...

1997
C. K. I. Williams

The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads in to a more general discussion of Gaussian processes in section 4. Section 5 deals with...
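The equivalence described in this abstract can be demonstrated numerically. The sketch below (illustrative only, not code from the paper; all names are hypothetical) shows that Bayesian linear regression with a Gaussian prior on the weights gives the same predictive mean as a Gaussian process with a linear kernel k(x, x') = α²·x·x':

```python
import numpy as np

# Bayesian linear regression with prior w ~ N(0, alpha^2 I) and observation
# noise N(0, sigma^2), compared against the equivalent GP with linear kernel.
rng = np.random.default_rng(0)
alpha, sigma = 1.0, 0.1
X = rng.normal(size=(20, 3))                      # training inputs
y = X @ np.array([0.5, -1.0, 2.0]) + sigma * rng.normal(size=20)
Xs = rng.normal(size=(5, 3))                      # test inputs

# Weight-space view: posterior mean of w, then predict at the test points.
A = (X.T @ X) / sigma**2 + np.eye(3) / alpha**2
w_mean = np.linalg.solve(A, X.T @ y) / sigma**2
pred_weight_space = Xs @ w_mean

# Function-space (GP) view: predictive mean with kernel k = alpha^2 * x.x'.
K = alpha**2 * (X @ X.T)
Ks = alpha**2 * (Xs @ X.T)
pred_gp = Ks @ np.linalg.solve(K + sigma**2 * np.eye(20), y)

# The two viewpoints agree to machine precision.
assert np.allclose(pred_weight_space, pred_gp)
```

This is exactly the "change of viewpoint" the tutorial refers to: the same predictor expressed once via a prior over parameters and once via a prior over functions.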

2000
Jani Lahnajärvi, Mikko Lehtokangas

In this paper we present a regularization approach to the training of all the network weights in cascade-correlation type constructive neural networks. In particular, the case of regularizing the output neuron of the network is presented. In this case, the output weights are trained by employing a regularized objective function containing a penalty term which is proportional to the weight values of...
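A regularized objective of the kind described can be sketched as follows (a minimal illustration under the assumption of a squared-error data term and an L2 penalty on the output weights; the function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def regularized_loss(w_out, hidden_acts, targets, lam=0.01):
    """Mean squared error plus lam * sum of squared output weights."""
    preds = hidden_acts @ w_out
    data_term = np.mean((preds - targets) ** 2)
    penalty = lam * np.sum(w_out ** 2)          # penalty proportional to weights
    return data_term + penalty

rng = np.random.default_rng(1)
H = rng.normal(size=(50, 4))                    # hidden-unit activations
t = H @ np.array([1.0, 0.0, -0.5, 2.0])         # targets
w = rng.normal(size=4)
loss = regularized_loss(w, H, t)
```

The penalty discourages large output weights, which is the mechanism the abstract credits with improving generalization of the constructively grown network.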

2011
Peiran Gao

Inferring network structure from observed data is a useful procedure for studying the relation between the structure and function of networks. For networks with observable dynamics but hidden structure, inference gives the best guess of the underlying connectivity that explains the observed data. For networks with known structure and observable dynamics, inference helps to separate the parts of the network th...
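The "observable dynamics, hidden structure" setting can be illustrated with a toy example (this is an assumed linear model for illustration, not the paper's method): for a linear dynamical system x_{t+1} = W x_t + noise, the hidden connectivity W can be inferred from the observed trajectory by least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 5, 500
W_true = rng.normal(size=(n, n))                 # hidden connectivity
W_true *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_true)))  # keep dynamics stable

# Simulate the observable dynamics driven by small noise.
X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = X[t] @ W_true.T + 0.01 * rng.normal(size=n)

# Regress x_{t+1} on x_t: the least-squares solution recovers W.
W_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
W_hat = W_hat.T
recovery_error = np.max(np.abs(W_hat - W_true))
```

Under this assumed model the recovered connectivity is the "best guess of the underlying connectivity that explains the observed data" in exactly the sense the abstract describes.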

2006
S. Deng, Y. Hwang

This paper employs the continuous-time analogue Hopfield neural network to compute the temperature distribution in forward heat conduction problems, and solves inverse heat conduction problems by using a back-propagation neural (BPN) network to identify the unknown boundary conditions. The weak generalization capacity of BPN networks is improved by employing the Bayesian regularization algorithm...
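Bayesian regularization of the kind mentioned here is commonly formulated in the MacKay evidence framework. The sketch below shows the regularized objective and one hyperparameter re-estimation step (an illustrative outline under that assumption, not the paper's implementation; all names are hypothetical):

```python
import numpy as np

def objective(weights, errors, alpha, beta):
    """F = beta * E_D + alpha * E_W, the Bayesian-regularized objective."""
    E_D = 0.5 * np.sum(errors ** 2)    # sum-of-squares data error
    E_W = 0.5 * np.sum(weights ** 2)   # sum-of-squares weight penalty
    return beta * E_D + alpha * E_W

def reestimate(weights, errors, gamma, n_samples):
    """Evidence-framework updates; gamma = effective number of parameters."""
    alpha = gamma / np.sum(weights ** 2)                 # gamma / (2 * E_W)
    beta = (n_samples - gamma) / np.sum(errors ** 2)     # (N - gamma) / (2 * E_D)
    return alpha, beta
```

Because alpha and beta are re-estimated from the data during training, the effective amount of weight decay adapts automatically, which is what counteracts the weak generalization of a plain BPN network.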

2008
Henrique S. Hippert, James W. Taylor

Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. However, there are still no widely accepted strategies for designing the models and for implementing them, which makes the process of modelling by neural networks largely heuristic, dependent on the experience of the...

Journal: Neural Networks: the official journal of the International Neural Network Society, 2010
Henrique S. Hippert, James W. Taylor

Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for auto...

Journal: :CoRR 2013
David A. McAllester

This tutorial gives a concise overview of existing PAC-Bayesian theory focusing on three generalization bounds. The first is an Occam bound which handles rules with finite precision parameters and which states that generalization loss is near training loss when the number of bits needed to write the rule is small compared to the sample size. The second is a PAC-Bayesian bound providing a genera...
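The Occam bound described in this abstract can be written in one common form (a sketch under the assumptions of a loss bounded in [0, 1] and a countable rule class with prior P; notation here is generic, not copied from the tutorial):

```latex
\text{With probability at least } 1-\delta \text{ over a sample of size } m,
\text{ simultaneously for all rules } h:
\[
  L(h) \;\le\; \hat{L}(h) \;+\; \sqrt{\frac{\ln \frac{1}{P(h)} + \ln \frac{1}{\delta}}{2m}}
\]
```

Here \(\ln \frac{1}{P(h)}\) plays the role of the description length of the rule: when the number of bits needed to write the rule is small compared to the sample size m, the square-root term is small and generalization loss is near training loss, as the abstract states.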
