Search results for: relaxed
Number of results: 14944
In this paper we deal with the von Neumann alternating projection method x_{k+1} = P_A P_B x_k and with its generalization of the form x_{k+1} = P_A(x_k + λ_k(P_A P_B x_k − x_k)), where A, B are closed and convex subsets of a Hilbert space H and Fix P_A P_B ≠ ∅. We do not suppose that A ∩ B ≠ ∅. We give sufficient conditions for the weak convergence of the sequence (x_k) to Fix P_A P_B in the general case and in the case A is...
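A minimal numerical sketch of the relaxed alternating projection iteration described above. The two sets used here (a half-plane A and a disjoint disk B) and the constant relaxation parameter are illustrative assumptions, not taken from the paper; they simply give an inconsistent case with a nonempty Fix P_A P_B.

import numpy as np

def proj_halfplane(x, n, c):
    """Project x onto A = {y : <n, y> <= c}, with n a unit normal."""
    excess = np.dot(n, x) - c
    return x - max(excess, 0.0) * n

def proj_disk(x, center, r):
    """Project x onto B = {y : ||y - center|| <= r}."""
    d = x - center
    norm = np.linalg.norm(d)
    return x if norm <= r else center + r * d / norm

def relaxed_alternating_projections(x0, lam=0.5, iters=200):
    # x_{k+1} = P_A(x_k + lam * (P_A P_B x_k - x_k)), lam constant here
    x = np.asarray(x0, dtype=float)
    n, c = np.array([1.0, 0.0]), 0.0          # A: half-plane x1 <= 0
    center, r = np.array([3.0, 0.0]), 1.0     # B: disk disjoint from A
    for _ in range(iters):
        pab = proj_halfplane(proj_disk(x, center, r), n, c)
        x = proj_halfplane(x + lam * (pab - x), n, c)
    return x

print(relaxed_alternating_projections([5.0, 4.0]))   # converges to (0, 0) in Fix P_A P_B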
Motivated by biological questions, we study configurations of equal-sized disks in the Euclidean plane that neither pack nor cover. Measuring the quality by the probability that a random point lies in exactly one disk, we show that the regular hexagonal grid gives the maximum among lattice configurations.
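To make the quality measure concrete, here is a small Monte Carlo sketch estimating the probability that a uniformly random point lies in exactly one disk of a hexagonal lattice configuration. The disk radius and lattice spacing are arbitrary choices for illustration, not values from the paper.

import numpy as np

def exactly_one_prob(spacing, radius, samples=20_000, rng=np.random.default_rng(1)):
    # hexagonal lattice of disk centers (rows offset by half a spacing)
    centers = np.array([(i * spacing + (j % 2) * spacing / 2, j * spacing * np.sqrt(3) / 2)
                        for i in range(-3, 4) for j in range(-3, 4)])
    # sample one lattice period: [0, s] x [0, s*sqrt(3)]
    pts = rng.uniform([0.0, 0.0], [spacing, spacing * np.sqrt(3)], size=(samples, 2))
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    counts = (d2 <= radius ** 2).sum(axis=1)      # how many disks cover each point
    return np.mean(counts == 1)

print(exactly_one_prob(spacing=2.2, radius=1.0))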
We study a new family of geometric graphs that interpolate between the Delaunay triangulation and the Gabriel graph. These graphs share many properties with β-skeletons for β ∈ [0, 1] (such as sublinear spanning ratio) with the added benefit of planarity (and consequently linear size and local routability).
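For reference, a brute-force construction of the Gabriel graph, one endpoint of the Delaunay-to-Gabriel spectrum mentioned above: an edge (p, q) is kept exactly when no third point lies inside the open disk having segment pq as its diameter. The interpolating family itself is not reproduced here.

import numpy as np
from itertools import combinations

def gabriel_graph(points):
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    edges = []
    for i, j in combinations(range(n), 2):
        center = (pts[i] + pts[j]) / 2.0
        r2 = np.sum((pts[i] - pts[j]) ** 2) / 4.0       # squared radius of diameter disk
        blocked = any(np.sum((pts[k] - center) ** 2) < r2
                      for k in range(n) if k not in (i, j))
        if not blocked:
            edges.append((i, j))
    return edges

print(gabriel_graph([(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]))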
Approximate program transformations such as task skipping [27, 28], loop perforation [20, 21, 32], multiple selectable implementations [3, 4, 15], approximate function memoization [10], and approximate data types [31] produce programs that can execute at a variety of points in an underlying performance versus accuracy tradeoff space. Namely, these transformed programs trade accuracy of their re...
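A toy illustration of one such transformation, loop perforation: the transformed loop executes only every stride-th iteration, trading accuracy of the computed mean for running time. This is a made-up example, not code from the cited systems.

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, stride=4):
    sampled = xs[::stride]          # skip (stride - 1) of every stride iterations
    return sum(sampled) / len(sampled)

data = [float(i % 97) for i in range(100_000)]
print(mean_exact(data), mean_perforated(data, stride=8))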
Heuristics based on the delete relaxation are at the forefront of modern domain-independent planning techniques. Here we introduce a principled and flexible technique for augmenting delete-relaxed tasks with a limited amount of delete information, by introducing special fluents that explicitly represent conjunctions of fluents in the original planning task. Differently from previous work in thi...
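For context, a sketch of the plain delete relaxation that this work augments: every delete effect of a STRIPS action is dropped, so facts, once achieved, stay true, and relaxed reachability can be computed by a simple fixpoint. The conjunction fluents the abstract introduces are not modeled here; the tiny domain is invented for illustration.

from collections import namedtuple

Action = namedtuple("Action", ["name", "pre", "add", "dele"])

def relax(action):
    return Action(action.name, action.pre, action.add, frozenset())   # drop deletes

def relaxed_reachable(init, goal, actions):
    facts = set(init)
    relaxed = [relax(a) for a in actions]
    changed = True
    while changed and not goal <= facts:
        changed = False
        for a in relaxed:
            if a.pre <= facts and not a.add <= facts:
                facts |= a.add
                changed = True
    return goal <= facts

acts = [Action("pick", frozenset({"hand-free"}), frozenset({"holding"}), frozenset({"hand-free"})),
        Action("drop", frozenset({"holding"}), frozenset({"delivered"}), frozenset({"holding"}))]
print(relaxed_reachable({"hand-free"}, {"delivered"}, acts))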
diag(u) K diag(v). We have T*1 = diag(u) K v ⇒ (T*1)^{a+1} h(x)^{-a} = Kv, where we substituted the expression for u. Re-writing T*1, (diag(u) K v)^{a+1} = diag(h(x)^a) Kv ⇒ u^{a+1} = h(x)^a (Kv)^{-a} ⇒ u = h(x)^{a/(a+1)} (Kv)^{-a/(a+1)}. A symmetric argument shows that v = y^{b/(b+1)} (K^T u)^{-b/(b+1)}. B Statistical Learning Bounds. We establish the proof of Theorem 5.1 in this section. For simpler notation, for a sequ...
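A numerical sketch of the scaling iteration these identities suggest: alternate u = h(x)^{a/(a+1)} (Kv)^{-a/(a+1)} and v = y^{b/(b+1)} (K^T u)^{-b/(b+1)}, then form T* = diag(u) K diag(v). The cost matrix, kernel, exponents a and b, and the marginals below are made-up test inputs, assumed only for illustration.

import numpy as np

def relaxed_scaling(hx, y, K, a, b, iters=500):
    u = np.ones_like(hx)
    v = np.ones_like(y)
    for _ in range(iters):
        u = hx ** (a / (a + 1)) * (K @ v) ** (-a / (a + 1))
        v = y ** (b / (b + 1)) * (K.T @ u) ** (-b / (b + 1))
    return np.diag(u) @ K @ np.diag(v)

rng = np.random.default_rng(0)
hx = rng.random(4) + 0.1
y = rng.random(4) + 0.1
cost = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
K = np.exp(-cost)                         # Gibbs kernel for the assumed cost
T = relaxed_scaling(hx, y, K, a=2.0, b=2.0)
print(T.sum(axis=1), hx)                  # relaxed row marginals vs. target h(x)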
Locally decodable codes (LDCs) and locally correctable codes (LCCs) are error-correcting codes in which individual bits of the message and codeword, respectively, can be recovered by querying only a few bits of a noisy codeword. These codes have found numerous applications both in theory and in practice. A natural relaxation of LDCs, introduced by Ben-Sasson et al. (SICOMP, 2006), allows the de...
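As a concrete, standard example of local decoding (the classical notion, not the relaxed variant in the abstract): the Hadamard code. To recover message bit i, the decoder queries only two codeword positions, a and a XOR e_i, and XORs the answers; with few corruptions the random choice of a usually avoids them.

import random

def hadamard_encode(msg_bits):
    k = len(msg_bits)
    # position a holds the inner product <msg, bits(a)> mod 2
    return [sum(msg_bits[j] for j in range(k) if (a >> j) & 1) % 2
            for a in range(2 ** k)]

def locally_decode_bit(codeword, k, i):
    a = random.randrange(2 ** k)
    return (codeword[a] + codeword[a ^ (1 << i)]) % 2   # exactly two queries

msg = [1, 0, 1, 1]
cw = hadamard_encode(msg)
cw[5] ^= 1                                # inject a single corruption
print([locally_decode_bit(cw, 4, i) for i in range(4)], "vs", msg)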
For weakly-supervised problems with deterministic constraints between the latent variables and observed output, learning necessitates performing inference over latent variables conditioned on the output, which can be intractable no matter how simple the model family is. Even finding a single latent variable setting that satisfies the constraints could be difficult; for instance, the observed ou...
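A tiny illustration of why inference conditioned on the output can be hard even in a trivial model: with a deterministic constraint (here, latent digits must sum to the observed total), merely enumerating the feasible latent settings is a search problem that grows exponentially with the number of latent slots. The arithmetic constraint is an invented example.

from itertools import product

def feasible_latents(observed_sum, n_slots, domain=range(10)):
    # brute force over all latent assignments; exponential in n_slots
    return [z for z in product(domain, repeat=n_slots) if sum(z) == observed_sum]

print(len(feasible_latents(12, 3)))       # number of latent settings consistent with 12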
Relaxed-plan-based (RPB) heuristics were first proposed by Hoffmann and Nebel for their FF system and are still used by current top-performing planners. Their main characteristic is that they are computed by extracting a so-called relaxed plan, that is, a plan for a relaxed version of the problem that ignores the negative effects of actions. However, in some domains that humans consider simple...
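A compact sketch of extracting a relaxed plan in this style: grow fact layers while ignoring delete effects, then backchain from the goal, picking for each needed fact an action that first achieved it. This is a simplified illustration on an invented two-action domain, not the FF implementation.

from collections import namedtuple

Action = namedtuple("Action", ["name", "pre", "add"])    # delete effects already ignored

def relaxed_plan(init, goal, actions):
    layer, achiever = set(init), {}
    while not goal <= layer:                              # forward relaxed exploration
        new = set()
        for a in actions:
            if a.pre <= layer:
                for f in a.add - layer:
                    achiever.setdefault(f, a)
                    new.add(f)
        if not new:
            return None                                   # goal unreachable even when relaxed
        layer |= new
    plan, open_facts, closed = [], set(goal) - set(init), set()
    while open_facts:                                     # backchain through achievers
        f = open_facts.pop()
        closed.add(f)
        a = achiever[f]
        if a not in plan:
            plan.append(a)
            open_facts |= (a.pre - set(init)) - closed
    return [a.name for a in reversed(plan)]

acts = [Action("boil", frozenset({"have-water"}), frozenset({"hot-water"})),
        Action("brew", frozenset({"hot-water", "have-tea"}), frozenset({"tea-ready"}))]
print(relaxed_plan({"have-water", "have-tea"}, {"tea-ready"}, acts))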