Search results for: differentiable lipschitzspaces

Number of results: 6,714

2018
Sebastian Tschiatschek, Aytunc Sahin, Andreas Krause

We consider the problem of learning submodular functions from data. These functions are important in machine learning and have a wide range of applications, e.g., data summarization, feature selection, and active learning. Despite their combinatorial nature, submodular functions can be maximized approximately with strong theoretical guarantees in polynomial time. Typically, learning the submodular function a...
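Since the abstract stops short of the optimization side, here is a minimal sketch of the classical greedy algorithm of Nemhauser, Wolsey and Fisher, the standard route to the polynomial-time (1 - 1/e) guarantee for monotone submodular maximization under a cardinality constraint; the function names and the toy coverage objective are illustrative, not from the paper.

```python
# Greedy maximization of a monotone submodular function under a
# cardinality constraint: the selected set achieves at least a
# (1 - 1/e) fraction of the optimal value (Nemhauser et al., 1978).
def greedy_submodular_max(f, ground_set, k):
    """f: set function on frozensets; ground_set: iterable; k: budget."""
    selected = set()
    for _ in range(k):
        # Pick the element with the largest marginal gain f(S + e) - f(S).
        best_elem, best_gain = None, 0.0
        for e in ground_set:
            if e in selected:
                continue
            gain = f(frozenset(selected | {e})) - f(frozenset(selected))
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:      # no element improves the objective
            break
        selected.add(best_elem)
    return selected

# Toy coverage objective (monotone and submodular): elements cover sets.
coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0
print(greedy_submodular_max(f, coverage.keys(), k=2))  # -> {1, 2}
```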

Journal: CoRR, 2017
Carlos Martin

We describe a class of cellular automata (CAs) that are end-to-end differentiable. These differentiable cellular automata (DCAs) interpolate the behavior of ordinary CAs through rules that act on distributions of states. The gradient of a DCA with respect to its parameters can be computed with an iterative propagation scheme that reuses previously computed gradients and values. Gradient-based optimization over DCAs could be used to find...
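To make the interpolation concrete, here is a minimal NumPy sketch of one step of a differentiable elementary (1-D, binary) CA; the `dca_step` interface and the eight-entry rule table `theta` are our own illustration, not the paper's code.

```python
import numpy as np

# One step of a differentiable 1-D binary CA: each cell carries the
# probability p of being in state 1, and theta[i] is the rule's output
# probability for the i-th (left, centre, right) neighbourhood pattern.
def dca_step(p, theta):
    left, right = np.roll(p, 1), np.roll(p, -1)
    new_p = np.zeros_like(p)
    for idx in range(8):
        l, c, r = (idx >> 2) & 1, (idx >> 1) & 1, idx & 1
        # Probability that the local neighbourhood matches this pattern.
        prob = ((left if l else 1 - left)
                * (p if c else 1 - p)
                * (right if r else 1 - right))
        new_p += prob * theta[idx]
    return new_p

# With theta in {0, 1}^8 this reproduces an ordinary elementary CA
# (here rule 110); fractional theta interpolates between rules, so a
# loss on the final distribution is differentiable in theta.
rule110 = np.array([0, 1, 1, 1, 0, 1, 1, 0], dtype=float)  # patterns 000..111
p = np.zeros(16); p[8] = 1.0
print(dca_step(p, rule110))
```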

2010
John P. Holmes, Jonathan M. Rosenberg

A differentiable semigroup is a topological semigroup (S, ∗) in which S is a differentiable manifold based on a Banach space and the associative multiplication function ∗ is continuously differentiable. If e is an idempotent element of such a semigroup, we show that there is an open set U containing e and a C¹ retraction Φ of U into the set of idempotents of S so that Φ(x)Φ(y) =...

2007
E. E. Floyd

1. The bordism groups. This note presents an outline of the authors' efforts to apply Thom's cobordism theory [6] to the study of differentiable periodic maps. First, however, we shall outline our scheme for computing the oriented bordism groups of a space [1]. These preliminary remarks bear on a problem raised by Milnor [4]. A finite manifold is the finite disjoint union of compact connect...

2008
BERND KIRCHHEIM

A real-valued continuously differentiable function f on the unit interval is constructed such that $\sum_{k=1}^{\infty} \beta_f(x, 2^{-k}) = \infty$ holds for every $x \in [0, 1]$. Here $\beta_f(x, 2^{-k})$ measures the distance of f to the best approximating linear function at scale $2^{-k}$ around x.
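The abstract does not define $\beta_f$; a Jones-type normalization, assumed here for concreteness (the paper's may differ), is:

```latex
% Assumed Jones-type definition: the r-scaled distance from f to its
% best affine approximation on the interval of radius r around x.
\[
  \beta_f(x, r) \;=\; \frac{1}{r}\,
  \inf_{L\ \text{affine}}\;\sup_{|y - x| \le r} \bigl| f(y) - L(y) \bigr|
\]
% For any C^1 function, beta_f(x, r) -> 0 as r -> 0 at every x; the
% construction shows this decay can be slow enough that the dyadic
% sum over scales 2^{-k} still diverges everywhere.
```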

2017
Dario Izzo, Francesco Biscani, Alessio Mereta

We introduce the use of high-order automatic differentiation, implemented via the algebra of truncated Taylor polynomials, in genetic programming. Using the Cartesian Genetic Programming encoding we obtain a high-order Taylor representation of the program output that is then used to back-propagate errors during learning. The resulting machine learning framework is called differentiable Cartesia...
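To illustrate the algebra the abstract refers to, here is a minimal sketch of truncated Taylor arithmetic; the `Taylor` class is our own illustration (the paper builds on an existing implementation), but the truncated Cauchy product is the standard rule.

```python
# Truncated Taylor polynomials: coeffs[k] stores f^(k)(x0) / k!, and
# arithmetic on these tuples propagates all derivatives up to `order`.
class Taylor:
    def __init__(self, coeffs, order):
        self.c = (list(coeffs) + [0.0] * (order + 1))[: order + 1]
        self.n = order

    def __add__(self, other):
        return Taylor([a + b for a, b in zip(self.c, other.c)], self.n)

    def __mul__(self, other):
        # Cauchy product, truncated: exact up to the retained order.
        out = [0.0] * (self.n + 1)
        for i, a in enumerate(self.c):
            for j in range(self.n + 1 - i):
                out[i + j] += a * other.c[j]
        return Taylor(out, self.n)

# The variable x expanded around x0 = 2 to order 3 has coefficients [2, 1].
x = Taylor([2.0, 1.0], order=3)
y = x * x * x   # x^3 around 2: value 8, f' = 12, f''/2! = 6, f'''/3! = 1
print(y.c)      # [8.0, 12.0, 6.0, 1.0]
```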

2008
J. Andrew Bagnell, David M. Bradley

Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior, such as a Laplacian (L1) prior, that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Addi...
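As one concrete illustration of the trade-off, the sketch below contrasts an L1 (Laplacian) penalty with a smooth surrogate; the pseudo-Huber surrogate and all names here are our choice for illustration, not necessarily the smoother prior used in the paper.

```python
import numpy as np

# MAP objective for sparse coding: squared reconstruction error plus a
# sparsity penalty. eps = 0 gives the L1 (Laplacian) prior; eps > 0
# gives a smooth pseudo-Huber surrogate that approaches L1 as eps -> 0.
def map_objective(code, basis, signal, lam, eps=0.0):
    residual = signal - basis @ code
    if eps == 0.0:
        penalty = np.sum(np.abs(code))                     # non-smooth L1
    else:
        penalty = np.sum(np.sqrt(code**2 + eps**2) - eps)  # smooth surrogate
    return 0.5 * np.sum(residual**2) + lam * penalty

# The smooth penalty has gradient code / sqrt(code**2 + eps**2), defined
# everywhere, so the MAP code varies continuously with the input signal,
# which is the stability property the abstract highlights.
rng = np.random.default_rng(0)
basis, signal = rng.normal(size=(10, 5)), rng.normal(size=10)
print(map_objective(np.zeros(5), basis, signal, lam=0.1, eps=1e-3))
```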

Journal: J. Global Optimization, 2017
Kamil A. Khan, Harry A. J. Watson, Paul I. Barton

McCormick’s classical relaxation technique constructs closed-form convex and concave relaxations of compositions of simple intrinsic functions. These relaxations have several properties which make them useful for lower bounding problems in global optimization: they can be evaluated automatically, accurately, and computationally inexpensively, and they converge rapidly to the relaxed function as...
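For background, the classical McCormick envelopes for a single bilinear term are easy to state; the sketch below implements these textbook rules as context for the abstract, not the paper's own construction, which composes and refines such relaxations.

```python
# Classical McCormick envelopes for w = x * y on [xL, xU] x [yL, yU]:
# the pointwise max of two planes underestimates x * y, and the
# pointwise min of two planes overestimates it, on the whole box.
def mccormick_bilinear(x, y, xL, xU, yL, yU):
    under = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    over = min(xU * y + x * yL - xU * yL,
               xL * y + x * yU - xL * yU)
    return under, over

# At an interior point the envelopes bracket the true product:
print(mccormick_bilinear(0.5, 0.5, 0.0, 1.0, 0.0, 1.0))  # (0.0, 0.5); x*y = 0.25
```

The max/min operations in these rules are what make classical McCormick relaxations nonsmooth in general.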

2006
Kai Behrend, Ping Xu

We introduce differentiable stacks and explain the relationship with Lie groupoids. Then we study S¹-bundles and S¹-gerbes over differentiable stacks. In particular, we establish the relationship between S¹-gerbes and groupoid S¹-central extensions. We define connections and curvings for groupoid S¹-central extensions, extending the corresponding notions of Brylinski, Hitchin and Murray for S¹-gerbes ...

2018
Lisa Lee, Emilio Parisotto, Devendra Singh, Ruslan Salakhutdinov

Our motivation is to scale value iteration to larger environments without a huge increase in computational demand, and fix the problems inherent to Value Iteration Networks (VIN) such as spatial invariance and unstable optimization. We show that VINs, and even extended VINs which improve some of their shortcomings, are empirically difficult to optimize, exhibiting instability during training an...
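For background on the module being stabilized, here is a minimal NumPy sketch of the recurrence a VIN unrolls: Q-values from convolving reward and value maps, then a max over the action channel; shapes, names, and the naive convolution are illustrative only.

```python
import numpy as np

def conv2d(img, k):
    """Naive 'same' 2-D cross-correlation with zero padding (3x3 kernel)."""
    H, W = img.shape
    pad = np.pad(img, 1)
    return np.array([[np.sum(pad[i:i + 3, j:j + 3] * k)
                      for j in range(W)] for i in range(H)])

# Core VIN recurrence: each iteration is one step of value iteration,
# expressed as convolutions plus a max over the action channel.
def vin_iteration(reward, value, w_r, w_v, iters=20):
    """reward, value: (H, W) maps; w_r, w_v: (A, 3, 3) per-action kernels."""
    for _ in range(iters):
        q = np.stack([conv2d(reward, w_r[a]) + conv2d(value, w_v[a])
                      for a in range(w_r.shape[0])])
        value = q.max(axis=0)      # greedy value per grid cell
    return value

# Tiny usage: 4 actions on an 8x8 grid with small random kernels.
rng = np.random.default_rng(0)
w_r, w_v = rng.normal(size=(4, 3, 3)) * 0.1, rng.normal(size=(4, 3, 3)) * 0.1
print(vin_iteration(rng.normal(size=(8, 8)), np.zeros((8, 8)), w_r, w_v, 5).shape)
```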

[Chart: number of search results per year]
