Search results for: squared log error loss function
Number of results: 1,839,436
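Before the individual results, a hedged illustration of the query term itself: squared log error measures the squared difference of log-transformed values, so it penalizes relative rather than absolute errors. A minimal NumPy sketch (the function name and the `log1p` convention are my own choices, not taken from any result below):

```python
import numpy as np

def squared_log_error(y_true, y_pred):
    """Mean squared logarithmic error: mean((log1p(y_pred) - log1p(y_true))^2).

    Penalizes ratio errors, so being off by 10% on a large target costs
    about the same as being off by 10% on a small one. Requires
    y_true, y_pred >= 0 because of the log1p transform.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

# Perfect predictions give zero loss.
print(squared_log_error([1.0, 10.0, 100.0], [1.0, 10.0, 100.0]))  # 0.0
```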
Wind power has developed rapidly as a clean energy source in recent years. The forecast error of wind power, however, makes it difficult to use wind power effectively. In some earlier statistical models, the forecast error was usually assumed to follow a Gaussian distribution, an assumption shown to be unreliable by statistical analysis. In this paper, a more suitable probability density function fo...
The present paper addresses the problem of estimating the model parameters of the logistic exponential distribution based on a progressive type-I hybrid censored sample. The maximum likelihood estimates are obtained and computed numerically using the Newton–Raphson algorithm. Further, Bayes estimates are derived under the squared error, LINEX, and generalized entropy loss functions. Two types of (independent and bivariate) prior distributions c...
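The three losses named in this abstract lead to closed-form Bayes estimators: the posterior mean (squared error), a log-of-exponential-moment expression (LINEX), and a negative-moment expression (general entropy). A hedged sketch from posterior draws; the shape parameters `a` and `k` are illustrative choices, not values from the paper:

```python
import numpy as np

def bayes_estimates(theta_samples, a=0.5, k=1.0):
    """Bayes estimators from posterior draws under three standard losses.

    - squared error loss        -> posterior mean
    - LINEX loss (shape a)      -> -(1/a) * log E[exp(-a*theta)]
    - general entropy (shape k) -> (E[theta^(-k)])^(-1/k)
    """
    t = np.asarray(theta_samples, dtype=float)
    se = t.mean()
    linex = -np.log(np.mean(np.exp(-a * t))) / a
    ge = np.mean(t ** (-k)) ** (-1.0 / k)
    return se, linex, ge

rng = np.random.default_rng(0)
draws = rng.gamma(shape=4.0, scale=0.5, size=100_000)  # stand-in posterior
print(bayes_estimates(draws))
```

On a degenerate posterior all three estimators coincide; on a skewed posterior the LINEX estimate shifts away from the mean in the direction the loss penalizes more heavily.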
We investigate the problem of estimating a random variable Y ∈ Y under a privacy constraint dictated by another correlated random variable X ∈ X , where estimation efficiency and privacy are assessed in terms of two different loss functions. In the discrete case, we use the Hamming loss function and express the corresponding utility-privacy tradeoff in terms of the privacy-constrained guessing ...
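The Hamming loss used in this abstract's discrete case simply counts disagreements; a minimal sketch (my own formulation, not the paper's notation):

```python
import numpy as np

def hamming_loss(y_true, y_est):
    """Fraction of positions where the estimate disagrees with the truth.

    For a discrete Y this is the natural 0-1-style loss: the Bayes
    estimator under it is the posterior mode, since any other guess
    is wrong on a larger fraction of draws.
    """
    y_true = np.asarray(y_true)
    y_est = np.asarray(y_est)
    return float(np.mean(y_true != y_est))

print(hamming_loss([0, 1, 2, 1], [0, 1, 0, 1]))  # one mismatch in four: 0.25
```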
The concept of stochastic ordering as introduced by Lehmann (1955) plays a major role in the theory and practice of statistics, and a large body of existing statistical work concerns itself with the problem of estimating distribution functions F and G under the constraint that F (x) ≤ G(x) for all x. Nevertheless, in economic theory, the weaker concept of second order stochastic dominance plays ...
We present a series of new theoretical, algorithmic, and empirical results for domain adaptation and sample bias correction in regression. We prove that the discrepancy is a distance for the squared loss when the hypothesis set is the reproducing kernel Hilbert space induced by a universal kernel such as the Gaussian kernel. We give new pointwise loss guarantees based on the discrepancy of the ...
In this paper we derive high probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning such as squared loss in linear regression, logistic loss in classification, and negative logarithm loss in portfolio management. We demonstrate an O(d...
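The three canonical exp-concave losses this abstract names can be written down directly; a minimal sketch (the parameterization and variable names are my own assumptions):

```python
import numpy as np

# Three losses from the abstract, as functions of a weight vector w
# and an example; each is exp-concave on a suitably bounded domain.

def squared_loss(w, x, y):
    """Linear regression: (w.x - y)^2."""
    return (np.dot(w, x) - y) ** 2

def logistic_loss(w, x, y):
    """Classification with label y in {-1, +1}: log(1 + exp(-y * w.x))."""
    return np.log1p(np.exp(-y * np.dot(w, x)))

def neg_log_wealth(w, r):
    """Portfolio management: -log(w.r) for allocation w, price relatives r."""
    return -np.log(np.dot(w, r))

w = np.array([0.5, 0.5])
print(squared_loss(w, np.array([1.0, 2.0]), 1.0))   # (1.5 - 1)^2 = 0.25
print(logistic_loss(w, np.array([1.0, 2.0]), 1))
print(neg_log_wealth(w, np.array([1.0, 1.0])))      # -log(1) = 0.0
```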
This paper compares three approaches for model selection: classical least squares methods, information theoretic criteria, and Bayesian approaches. Least squares methods are not themselves model selection methods, although one can select the model that yields the smallest sum of squared errors. Information theoretic approaches balance overfitting with model accuracy by incorporating terms that pena...
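The penalty idea behind information theoretic criteria can be sketched for the Gaussian case: AIC and BIC share the fit term but charge each extra parameter differently. A hedged sketch (formulas are the standard Gaussian-likelihood forms; the toy data are my own):

```python
import numpy as np

def aic_bic(rss, n, k):
    """Gaussian-likelihood AIC and BIC from a residual sum of squares.

    Both start from the fit term n*log(rss/n); AIC adds 2k while BIC
    adds k*log(n), so each extra parameter must buy enough rss
    reduction to pay its penalty. Smaller values are preferred.
    """
    fit = n * np.log(rss / n)
    return fit + 2 * k, fit + k * np.log(n)

# Toy comparison: polynomial degrees 1..3 on noisy quadratic data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.standard_normal(50)
for deg in (1, 2, 3):
    coef = np.polyfit(x, y, deg)
    rss = float(np.sum((np.polyval(coef, x) - y) ** 2))
    print(deg, aic_bic(rss, len(x), deg + 1))
```

Least squares alone would always prefer the highest degree; the log(n) penalty in BIC typically recovers the true degree 2 here.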
We introduce a joint weighted Neumann series (WNS) and Gauss–Seidel (GS) approach to implement an approximated linear minimum mean-squared error (LMMSE) detector for uplink massive multiple-input multiple-output (M-MIMO) systems. We first propose to initialize the GS iteration with the WNS method, which produces an initial solution closer to the LMMSE one than the conventional zero-vector or diagonal-matrix based schemes. Th...
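The GS iteration in this abstract solves the LMMSE linear system (H^H H + σ²I)x = H^H y without an explicit matrix inverse. A minimal sketch of plain Gauss–Seidel sweeps on a toy system (dimensions, noise level, and the zero initializer are my own illustrative choices; the paper's point is that a WNS-based initializer needs fewer sweeps than this zero start):

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=200):
    """Gauss-Seidel sweeps for A x = b. Converges when A is Hermitian
    positive definite, as the regularized LMMSE filtering matrix is."""
    n = len(b)
    x = np.array(x0, dtype=complex)
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]       # off-diagonal part of row i
            x[i] = (b[i] - s) / A[i, i]
    return x

# Toy LMMSE system: A = H^H H + sigma^2 I, b = H^H y.
rng = np.random.default_rng(1)
H = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
A = H.conj().T @ H + 0.1 * np.eye(4)
b = H.conj().T @ (rng.standard_normal(8) + 1j * rng.standard_normal(8))
x = gauss_seidel(A, b, x0=np.zeros(4))
print(np.linalg.norm(A @ x - b))  # residual shrinks toward zero
```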
We introduce the implicitly constrained least squares (ICLS) classifier, a novel semi-supervised version of the least squares classifier. This classifier minimizes the squared loss on the labeled data among the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, this approach does not introduce explicit additional assump...