Search results for: lasso shrinkage method

Number of results: 374444

2012
Wei Qian Yuhong Yang

The adaptive lasso is a model selection method shown to be both consistent in variable selection and asymptotically normal in coefficient estimation. The actual variable selection performance of the adaptive lasso depends on the weight used. It turns out that the weight assignment using the OLS estimate (OLS-adaptive lasso) can result in very poor performance when collinearity of the model matr...
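The OLS-weighted scheme the abstract refers to can be sketched in a few lines: the adaptive weights are the inverse absolute OLS coefficients, and rescaling each column by its OLS magnitude lets a standard lasso solve the weighted problem. This is a minimal illustration on synthetic data, assuming the common weight exponent gamma = 1; the data, sample sizes, and penalty level are illustrative choices, not from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Synthetic data: 5 predictors, of which 3 carry signal.
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta = np.array([3.0, 0.0, 1.5, 0.0, 2.0])
y = X @ beta + 0.5 * rng.standard_normal(n)

# OLS-adaptive lasso: weights w_j = 1 / |beta_ols_j| (gamma = 1).
ols = LinearRegression().fit(X, y)
w = 1.0 / np.abs(ols.coef_)

# Scaling column j by 1/w_j turns the weighted l1 penalty into a plain one.
X_scaled = X / w
lasso = Lasso(alpha=0.1).fit(X_scaled, y)
coef = lasso.coef_ / w  # map the solution back to the original scale
```

When OLS is unstable (e.g. under strong collinearity), these weights are themselves unreliable, which is exactly the failure mode the abstract points out.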

Journal: Academic Emergency Medicine 2009

2013
Fabian L. Wauthier Nebojsa Jojic Michael I. Jordan

The Lasso is a cornerstone of modern multivariate data analysis, yet its performance suffers in the common situation in which covariates are correlated. This limitation has led to a growing number of Preconditioned Lasso algorithms that pre-multiply X and y by matrices P_X, P_y prior to running the standard Lasso. A direct comparison of these and similar Lasso-style algorithms to the original La...
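The preconditioning recipe described above can be sketched as follows. The specific left preconditioner used here (whitening built from the SVD of X, applied to both X and y) is one illustrative choice among the family of P_X, P_y transforms the abstract surveys; the data and penalty level are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic correlated design: all columns share a common latent factor.
rng = np.random.default_rng(1)
n, p = 60, 4
base = rng.standard_normal((n, 1))
X = base + 0.3 * rng.standard_normal((n, p))
y = X @ np.array([2.0, 0.0, 0.0, 1.0]) + 0.1 * rng.standard_normal(n)

# One choice of preconditioner: P = U diag(1/s) U^T from the SVD of X,
# so that P @ X = U @ Vt has orthonormal columns (decorrelated design).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = U @ np.diag(1.0 / s) @ U.T
X_pre, y_pre = P @ X, P @ y

# Standard lasso on the preconditioned data.
fit = Lasso(alpha=0.05).fit(X_pre, y_pre)
```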

Journal: Iranian Journal of Physics Research 0
Farshad Ghasemi (f ghasemi), 1. Radiation Application Group, Faculty of Nuclear Engineering, Shahid Beheshti University; Fereydoon Abbasi-Davani (f abasidavani), 1. Radiation Application Group, Faculty of Nuclear Engineering, Shahid Beheshti University; Mohammad Lamehi-Rachti (m lamehirachti), 2. School of Physics and Accelerators, Institute for Research in Fundamental Sciences; Sasan Ahmadian-Namini (s ahmadiannamini), 1. Radiation Application Group, Faculty of Nuclear Engineering, Shahid Beheshti University

The goal of the electron linear accelerator project at the Institute for Research in Fundamental Sciences is to design an accelerator whose components are, as far as possible, manufactured in Iran. The accelerator is of the traveling-wave type. A survey shows that a variety of forming and joining methods exist for fabricating the cavities of the accelerating tube. The shrink-fit method was chosen for fabricating the accelerating tube of this project, modeled on the Mark III accelerator at Stanford University. ...

2011
Kotaro Kitagawa Kumiko Tanaka-Ishii

Relational lasso is a method that incorporates feature relations within machine learning. By using automatically obtained noisy relations among features, relational lasso learns an additional penalty parameter per feature, which is then incorporated as a regularizer within the target optimization function. Relational lasso has been tested on three different tasks: text categorization, ...

2007
Peng Zhao Bin Yu Saharon Rosset

Many statistical machine learning algorithms (in regression or classification) minimize either an empirical loss function as in AdaBoost, or a penalized empirical loss as in SVM. A single regularization tuning parameter controls the trade-off between fidelity to the data and generalizability, or equivalently between bias and variance. When this tuning parameter changes, a regularization "path" of...
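For the lasso specifically, the regularization path the abstract mentions can be traced directly: as the tuning parameter alpha decreases from its maximum, coefficients enter the model one by one. A minimal sketch on synthetic data, using scikit-learn's path solver:

```python
import numpy as np
from sklearn.linear_model import lasso_path

# Synthetic data with two signal features among six.
rng = np.random.default_rng(2)
X = rng.standard_normal((80, 6))
y = 2.0 * X[:, 0] - X[:, 3] + 0.1 * rng.standard_normal(80)

# alphas come back in decreasing order; coefs has shape
# (n_features, n_alphas). At the largest alpha every coefficient is zero,
# and features activate as alpha shrinks.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
```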

2017
Nils Ternès Federico Rotolo Georg Heinze Stefan Michiels

Stratified medicine seeks to identify biomarkers or parsimonious gene signatures distinguishing patients who will benefit most from a targeted treatment. We evaluated 12 approaches in high-dimensional Cox models in randomized clinical trials: penalization of the biomarker main effects and biomarker-by-treatment interactions (full-lasso, three kinds of adaptive lasso, ridge+lasso and group-lass...

2006
Hui ZOU

The lasso is a popular technique for simultaneous estimation and variable selection. Lasso variable selection has been shown to be consistent under certain conditions. In this work we derive a necessary condition for the lasso variable selection to be consistent. Consequently, there exist certain scenarios where the lasso is inconsistent for variable selection. We then propose a new version of ...

2012
Ryan J. Tibshirani Jonathan Taylor

We derive the degrees of freedom of the lasso fit, placing no assumptions on the predictor matrix X. Like the well-known result of Zou et al. (2007), which gives the degrees of freedom of the lasso fit when X has full column rank, we express our result in terms of the active set of a lasso solution. We extend this result to cover the degrees of freedom of the generalized lasso fit for an arbitr...
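The headline result can be illustrated concretely: for the lasso at a fixed penalty level, the size of the active set (the number of nonzero coefficients) is an unbiased estimate of the fit's degrees of freedom. A small sketch on synthetic data; the design, penalty, and sample sizes are illustrative choices, not from the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: 8 predictors, 2 carrying signal.
rng = np.random.default_rng(3)
X = rng.standard_normal((50, 8))
y = X[:, 0] - 2.0 * X[:, 5] + 0.2 * rng.standard_normal(50)

fit = Lasso(alpha=0.1).fit(X, y)
active = np.flatnonzero(fit.coef_)  # the active set A of the solution
df_estimate = active.size           # |A| estimates the df of the lasso fit
```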

2012
Marius Kwemou

We consider the problem of estimating a function f0 in a logistic regression model. We propose to estimate this function f0 by a sparse approximation built as a linear combination of elements of a given dictionary of p functions. This sparse approximation is selected by the Lasso or Group Lasso procedure. In this context, we state non-asymptotic oracle inequalities for Lasso and Group Lasso under...
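The plain-lasso half of this setup is easy to sketch: an l1-penalized logistic regression selects a sparse subset of the dictionary. scikit-learn has no built-in group lasso, so this minimal example covers only the ungrouped case, on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary outcome driven by 2 of 10 dictionary features.
rng = np.random.default_rng(4)
X = rng.standard_normal((120, 10))
logit = 2.0 * X[:, 0] - X[:, 4]
y = (rng.random(120) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# l1 penalty = lasso for logistic regression; C is the inverse penalty.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
```

For the Group Lasso variant the abstract analyzes, a dedicated package (e.g. one implementing grouped l1/l2 penalties) would be needed.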
