Search results for: while tsvd produces a sparse model

Number of results: 13,806,443

Journal: The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 2012
Jan Clemens, Sandra Wohlgemuth, Bernhard Ronacher

Sparse coding schemes are employed by many sensory systems and implement efficient coding principles. Yet, the computations yielding sparse representations are often only partly understood. The early auditory system of the grasshopper produces a temporally and population-sparse representation of natural communication signals. To reveal the computations generating such a code, we estimated 1D an...

Mahdi Roozbeh, Monireh Maanavi

Background and purpose: Machine learning is a class of modern, powerful tools that can solve many important problems people face today. Support vector regression (SVR) is a way to build a regression model and a remarkable member of the machine learning family. SVR has been proven to be an effective tool in real-valued function estimation. As a supervised-learning appr...

Journal: :Physical review. E, Statistical, nonlinear, and soft matter physics 2005
P. L. Krapivsky, S. Redner

We introduce a growing network model in which a new node attaches to a randomly selected node, as well as to all ancestors of the target node. This mechanism produces a sparse, ultrasmall network where the average node degree grows logarithmically with network size while the network diameter equals 2. We determine basic geometrical network properties, such as the size dependence of the number o...
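The attachment rule described in this abstract is straightforward to simulate. A minimal sketch (illustrative code, not the authors'; `grow_ancestor_network` is a hypothetical name): each new node picks a uniformly random target and links to the target plus all of the target's ancestors, so the initial node ends up adjacent to every later node.

```python
import random

def grow_ancestor_network(n, seed=0):
    """Grow a network in which each new node attaches to a random
    target node and to all of that target's ancestors."""
    random.seed(seed)
    ancestors = {0: set()}            # node -> set of ancestor nodes
    edges = set()
    for new in range(1, n):
        target = random.randrange(new)
        anc = ancestors[target] | {target}
        for a in anc:                 # link to target and every ancestor
            edges.add((new, a))
        ancestors[new] = anc          # new node inherits that ancestry
    return edges

edges = grow_ancestor_network(2000)
avg_degree = 2 * len(edges) / 2000    # grows roughly logarithmically in n
```

Because node 0 belongs to every ancestor set, any two nodes are connected through it, which is why the diameter equals 2 while the graph stays sparse.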

2018
Gérard Biau, Benoît Cadre, Laurent Rouvière (LPSM UMR 8001, IRMAR)

Gradient tree boosting is a prediction algorithm that sequentially produces a model in the form of linear combinations of decision trees, by solving an infinite-dimensional optimization problem. We combine gradient boosting and Nesterov’s accelerated descent to design a new algorithm, which we call AGB (for Accelerated Gradient Boosting). Substantial numerical evidence is provided on both synth...
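The combination of boosting and Nesterov acceleration can be illustrated in miniature. A hypothetical least-squares sketch, not the authors' AGB implementation: regression stumps serve as weak learners, each fitted to the residuals of a Nesterov-style auxiliary sequence `g` rather than of the main model `f`.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump for residuals r on a 1-D feature x."""
    best_sse, best_split = np.inf, None
    for t in np.unique(x)[:-1]:
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_split = sse, (t, left.mean(), right.mean())
    t, lm, rm = best_split
    return lambda z: np.where(z <= t, lm, rm)

def agb(x, y, rounds=50, lr=0.1):
    """Accelerated gradient boosting sketch: boost against the auxiliary
    sequence g, then extrapolate with FISTA-style momentum."""
    f = np.zeros_like(y)
    g = np.zeros_like(y)
    lam = 1.0
    for _ in range(rounds):
        h = fit_stump(x, y - g)                  # weak learner on g's residuals
        f_new = g + lr * h(x)                    # gradient-boosting step
        lam_new = (1 + np.sqrt(1 + 4 * lam ** 2)) / 2
        g = f_new + ((lam - 1) / lam_new) * (f_new - f)  # momentum extrapolation
        f, lam = f_new, lam_new
    return f

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)
pred = agb(x, y)
mse = float(np.mean((pred - y) ** 2))
```

Faster reduction of training error per round than plain boosting at the same learning rate is the effect the paper studies.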

Thesis: Ministry of Science, Research and Technology - Isfahan University of Technology - Faculty of Mathematics, 1390

Abstract: In the paper of Black and Scholes (1973), a closed-form solution for the price of a European option is derived. As an extension of the Black-Scholes model with constant volatility, option pricing models with time-varying volatility have been suggested within the framework of generalized autoregressive conditional heteroskedasticity (GARCH). These processes can explain a number of em...

Journal: Mathematical Researches, 2015
Hosseni, S. M., Keshvari, A. R.

A new technique for finding the optimal parameter in the TSVD regularization method is based on a curve drawn against the residual norm [5]. Since TSVD regularization uses a discrete regularization parameter, this curve is also discrete. In this paper we present a mathematical analysis of the curve, showing that it has an L-shaped path very similar t...
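The discrete curve in question can be traced numerically. A minimal sketch with illustrative data (a Hilbert-matrix test problem, not the paper's example): for each truncation level k, solve the TSVD-regularized problem and record the residual norm and the solution norm; plotting one against the other gives the discrete L-shaped curve.

```python
import numpy as np

def tsvd_solution(A, b, k):
    """TSVD-regularized solution keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coef = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coef

# Ill-conditioned test problem: 8x8 Hilbert matrix with small data noise
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

# Discrete L-curve: one (residual norm, solution norm) point per truncation k
points = []
for k in range(1, n + 1):
    x_k = tsvd_solution(A, b, k)
    points.append((np.linalg.norm(A @ x_k - b), np.linalg.norm(x_k)))
```

As k grows, the residual norm can only fall while the solution norm can only rise; the corner of that trade-off is where the curve-based parameter choice lands.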

Thesis: Ministry of Science, Research and Technology - University of Mazandaran, 1388

Target tracking is the tracking of an object in an image sequence. Target tracking in an image sequence consists of two different parts: 1- moving target detection, 2- tracking of the moving target. In some tracking algorithms these two parts are combined into a single algorithm. The main goal of this thesis is to provide a new framework for the effective tracking of different kinds of moving target...

Journal: :Neural computation 2005
David B. Grimes, Rajesh P. N. Rao

Recent algorithms for sparse coding and independent component analysis (ICA) have demonstrated how localized features can be learned from natural images. However, these approaches do not take image transformations into account. We describe an unsupervised algorithm for learning both localized features and their transformations directly from images using a sparse bilinear generative model. We sh...

2011
Robert Moore, John DeNero

This paper investigates the relationship between the loss function, the type of regularization, and the resulting model sparsity of discriminatively-trained multiclass linear models. The effects on sparsity of optimizing log loss are straightforward: L2 regularization produces very dense models while L1 regularization produces much sparser models. However, optimizing hinge loss yields more nuan...
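The L2-dense versus L1-sparse contrast is easy to reproduce in miniature. A least-squares sketch (not the paper's multiclass log-loss setup; data and penalty strengths are illustrative): ridge regression has a closed form and leaves every coefficient nonzero, while the lasso, solved here by proximal gradient (ISTA) with soft-thresholding, drives inactive coefficients exactly to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]                 # only 3 truly active features
y = X @ w_true + 0.01 * rng.standard_normal(100)

lam = 1.0

# L2 (ridge): closed form, virtually never produces exact zeros
w_l2 = np.linalg.solve(X.T @ X + lam * np.eye(20), X.T @ y)

# L1 (lasso) via ISTA: gradient step followed by soft-thresholding
step = 1.0 / np.linalg.norm(X.T @ X, 2)       # 1 / Lipschitz constant
w_l1 = np.zeros(20)
for _ in range(2000):
    z = w_l1 - step * (X.T @ (X @ w_l1 - y))  # gradient step on squared loss
    w_l1 = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

n_zero_l2 = int(np.sum(np.abs(w_l2) < 1e-8))
n_zero_l1 = int(np.sum(np.abs(w_l1) < 1e-8))
```

Here the L2 model keeps all 20 coefficients nonzero, while the L1 model typically zeroes out the 17 inactive ones, mirroring the dense-versus-sparse contrast described above.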

2006
Marcel Katz, Sven E. Krüger, Martin Schafföner, Edin Andelic, Andreas Wendemuth

In this paper we investigate two discriminative classification approaches for frame-based speaker identification and verification, namely Support Vector Machine (SVM) and Sparse Kernel Logistic Regression (SKLR). SVMs have already shown good results in regression and classification in several fields of pattern recognition as well as in continuous speech recognition. While the non-probabilistic ...

[Chart: number of search results per year]