Search results for: mollifier subgradient
Number of results: 1200
We present an effective algorithm for the minimization of nonconvex, locally Lipschitz functions based on mollifier functions approximating the Clarke generalized gradient. To this aim, we first approximate the Clarke generalized gradient by mollifier subgradients. To construct this approximation, we use a set of gradients of averaged functions. Then, we show that the convex hull of this set serves as ...
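To give a concrete sense of the averaged-gradient construction described in this abstract, here is a small Monte-Carlo sketch in Python. It is a rough illustration, not the paper's algorithm: it samples points near x from a uniform mollifier kernel of radius eps, estimates a gradient at each sample by central differences, and returns the resulting set, whose convex hull approximates the Clarke generalized gradient. The function name and all parameters are hypothetical.

```python
import numpy as np

def mollifier_subgradients(f, x, eps=1e-2, n_samples=30, fd_step=1e-6, rng=None):
    """Monte-Carlo sketch of mollifier (averaged-gradient) subgradients.

    Samples points near x from a uniform kernel of radius eps, estimates a
    finite-difference gradient at each sample, and returns the set of
    estimates; their convex hull approximates the Clarke generalized
    gradient of a locally Lipschitz f at x (heuristic illustration only).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    grads = []
    for _ in range(n_samples):
        # one sample of the mollifier kernel (uniform on a box of radius eps)
        z = x + eps * rng.uniform(-1.0, 1.0, size=x.shape)
        # central finite differences approximate the gradient at the sampled point
        g = np.array([(f(z + fd_step * e) - f(z - fd_step * e)) / (2 * fd_step)
                      for e in np.eye(x.size)])
        grads.append(g)
    return np.array(grads)

# Example: f(x) = |x1| + |x2| is nonsmooth at the origin; its Clarke gradient
# there is the box [-1, 1]^2, and the sampled gradients are sign vectors
# (+/-1, +/-1) whose convex hull recovers that box.
f = lambda v: abs(v[0]) + abs(v[1])
G = mollifier_subgradients(f, [0.0, 0.0])
print(G.mean(axis=0))   # a point of the approximating convex hull, close to [0, 0]
```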
A mollifier played a key role in showing N_0(T) > (1/3)N(T) for large T in ref. 1 [Levinson, N. (1974) Advan. Math. 13, 383-436]. A basic problem in ref. 1 was that of obtaining an upper bound for a sum of two terms, one larger than the other. Here a deductive procedure is given for finding a mollifier that actually minimizes the larger term. An Euler-Lagrange equation is obtained. (Optimization...
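The abstract does not reproduce the mollifier itself. For orientation only, mollifiers in this line of work are typically short Dirichlet polynomials of the shape below; the length y = T^θ and the smoothing polynomial P are generic placeholders, not Levinson's optimized choice.

```latex
% Illustrative shape of a mollifier for \zeta(s) near the critical line:
% a short Dirichlet polynomial mimicking 1/\zeta(s), with length y = T^{\theta}
% and a smoothing polynomial P (commonly normalized so that P(0)=0, P(1)=1).
M(s) \;=\; \sum_{n \le y} \frac{\mu(n)}{n^{s}}\, P\!\left(\frac{\log (y/n)}{\log y}\right)
```

The deductive procedure mentioned in the abstract can then be read as minimizing the larger of the two terms over the choice of mollifier, which is presumably how the Euler-Lagrange equation enters.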
Let M be a differentiable manifold. We say that a tensor field g defined on M is non-regular if g is in some local L space or if g is continuous. In this work we define a mollifier smoothing g_ε of g with the following feature: if g is a Riemannian metric of class C, then the Levi-Civita connection and the Riemannian curvature tensor of g_ε converge to the Levi-Civita connection and to the R...
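The abstract does not spell out how g_ε is built. A standard construction (an assumption here; the paper's definition may differ, e.g. by patching coordinate charts with a partition of unity) is componentwise convolution with a mollifier in local coordinates:

```latex
% Componentwise mollification of the metric in a coordinate chart,
% with \eta a smooth bump function of unit integral.
g^{\varepsilon}_{ij}(x) \;=\; (g_{ij} * \eta_{\varepsilon})(x)
\;=\; \int g_{ij}(x-y)\,\eta_{\varepsilon}(y)\,dy,
\qquad
\eta_{\varepsilon}(y) = \varepsilon^{-n}\,\eta\!\left(\frac{y}{\varepsilon}\right),
\quad \eta \in C^{\infty}_{c},\ \int \eta = 1 .
```

As ε → 0 the smoothed components converge to g_ij locally (uniformly on compact sets when g is continuous), which is the kind of convergence underlying the statement about the Levi-Civita connection and the curvature tensor.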
Abstract. Motivated by classical vortex blob methods for the Euler equations, we develop a numerical blob method for the aggregation equation. This provides a counterpoint to existing literature on particle methods. By regularizing the velocity field with a mollifier or “blob function”, the blob method has a faster rate of convergence and allows a wider range of admissible kernels. In fact, we ...
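As a rough illustration of the regularization idea (not the scheme analyzed in the paper), the sketch below evolves particles for a 1-D aggregation equation with the attractive kernel W(x) = |x|; the velocity field is smoothed by convolving W' = sign with a Gaussian mollifier of width eps. Kernel choice, parameters, and time stepping are all illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

def mollified_kernel_grad(r, eps):
    """Derivative of the mollified kernel W_eps = W * eta_eps for W(x) = |x| in 1-D:
    sign(r) convolved with a Gaussian of width eps, i.e. erf(r / (sqrt(2) eps))."""
    return erf(r / (np.sqrt(2) * eps))

def step(x, m, eps, dt):
    """One forward-Euler step: each particle moves with the regularized velocity field
    v_i = - sum_j m_j W_eps'(x_i - x_j)."""
    r = x[:, None] - x[None, :]                               # pairwise differences x_i - x_j
    v = -(mollified_kernel_grad(r, eps) * m[None, :]).sum(axis=1)
    return x + dt * v

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)        # particle ("blob") positions
m = np.full_like(x, 1.0 / x.size)      # equal masses
for _ in range(100):
    x = step(x, m, eps=0.05, dt=0.05)
print(x.std())   # the spread shrinks: particles aggregate under the attractive kernel
```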
Stochastic subgradient methods play an important role in machine learning. In this project we introduced the concepts of subgradient methods and stochastic subgradient methods, discussed their convergence conditions, and weighed their strengths and weaknesses against competing methods. We demonstrated the application of (stochastic) subgradient methods to machine learning with a running example of tr...
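The running example is cut off above; as a generic illustration of a stochastic subgradient step (not necessarily the project's example), the sketch below trains a linear classifier with the L2-regularized hinge loss, a standard nonsmooth objective. All names and parameters are assumptions.

```python
import numpy as np

def stochastic_subgradient_svm(X, y, lam=0.1, lr0=1.0, epochs=20, seed=0):
    """Stochastic subgradient descent for the L2-regularized hinge loss
        f(w) = (lam/2) ||w||^2 + (1/n) sum_i max(0, 1 - y_i <w, x_i>).
    The hinge loss is nonsmooth; at a kink any convex combination of the
    one-sided derivatives is a valid subgradient (the 0 branch is used here)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            margin = y[i] * (X[i] @ w)
            # a subgradient of the sampled term of the objective
            g = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
            # diminishing step size, a classic sufficient condition for convergence
            w -= (lr0 / t) * g
    return w

# Toy usage on (roughly) linearly separable data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
w = stochastic_subgradient_svm(X, y)
print(np.mean(np.sign(X @ w) == y))   # training accuracy
```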
In this paper, we present a new approach for solving the absolute value equation (AVE) which uses the Levenberg-Marquardt method with a conjugate subgradient structure. In conjugate subgradient methods the new direction is obtained by combining the steepest descent direction and the previous direction, which may not lead to good numerical results. Therefore, we replace the steepest descent dir...
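For context, the sketch below applies a plain Levenberg-Marquardt iteration to the AVE residual F(x) = Ax − |x| − b using an element of its generalized Jacobian. It is a baseline illustration under assumed parameters, not the conjugate-subgradient variant proposed in the paper.

```python
import numpy as np

def ave_levenberg_marquardt(A, b, x0=None, mu=1e-3, tol=1e-10, max_iter=100):
    """Plain Levenberg-Marquardt sketch for the absolute value equation A x - |x| = b,
    using the generalized Jacobian J(x) = A - diag(sign(x)) of the residual."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        F = A @ x - np.abs(x) - b                    # residual of the AVE
        if np.linalg.norm(F) < tol:
            break
        J = A - np.diag(np.sign(x))                  # an element of the generalized Jacobian
        # damped Gauss-Newton (Levenberg-Marquardt) step
        d = np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ F)
        x = x + d
    return x

# Usage: the AVE has a unique solution when all singular values of A exceed 1;
# a strongly diagonally shifted random A satisfies this with high probability.
rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)) + 10 * np.eye(n)
x_true = rng.normal(size=n)
b = A @ x_true - np.abs(x_true)
print(np.allclose(ave_levenberg_marquardt(A, b), x_true, atol=1e-6))
```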
This paper describes the Subgradient Marching algorithm to compute the derivative of the geodesic distance with respect to the metric. Since the geodesic distance is a concave function of the metric, this algorithm computes an element of the subgradient in O(N log N) operations on a discrete grid of N points. It performs a front propagation that computes the subgradient of a discrete geodesic dist...
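As a toy analogue of the concavity fact (not the paper's O(N log N) front-propagation scheme), the sketch below computes a graph geodesic distance with Dijkstra's algorithm and returns the indicator of the edges on one shortest path. Because the distance is a pointwise minimum of linear functions of the edge weights, that indicator is a valid element of the (super)gradient with respect to the weights. All names are hypothetical.

```python
import heapq

def geodesic_distance_subgradient(weights, adj, source, target):
    """Dijkstra sketch: returns d(source, target) and the indicator vector of the
    edges on one shortest path, a (super)gradient of the distance w.r.t. weights."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, e in adj[u]:                          # e indexes the edge weight
            nd = d + weights[e]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = (u, e)
                heapq.heappush(heap, (nd, v))
    sub = [0.0] * len(weights)                       # derivative of d w.r.t. each weight
    v = target
    while v != source:                               # walk the shortest path backwards
        u, e = prev[v]
        sub[e] = 1.0
        v = u
    return dist[target], sub

# Toy usage: triangle graph 0-1-2 with a more expensive direct edge 0-2.
weights = [1.0, 1.0, 3.0]                            # w01, w12, w02
adj = {0: [(1, 0), (2, 2)], 1: [(0, 0), (2, 1)], 2: [(1, 1), (0, 2)]}
d, g = geodesic_distance_subgradient(weights, adj, 0, 2)
print(d, g)   # 2.0, [1.0, 1.0, 0.0]: only edges on the shortest path carry gradient
```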