Similar Articles
On Simultaneous Linearization of Diffeomorphisms of the Sphere
Let R_1, R_2, …, R_m be rotations generating SO(d+1), d ≥ 2, and f_1, f_2, …, f_m be their small smooth perturbations. We show that {f_α} can be simultaneously linearized if and only if the associated random walk has zero Lyapunov exponents. As a consequence we obtain stable ergodicity of actions of random rotations in even dimensions. 1. Main results. Let f_1, f_2, …, f_m be diffeomorphisms of S^d, d...
Simultaneous Localization And Mapping Without Linearization
We apply a combination of linear time-varying (LTV) Kalman filtering and nonlinear contraction tools to the problem of simultaneous localization and mapping (SLAM), in a fashion which avoids linearized approximations altogether. By exploiting virtual synthetic measurements, the LTV Kalman observer avoids errors and approximations brought by the linearization process in the EKF SLAM. Furthermore...
A proof of simultaneous linearization with a polylog estimate
f(w) = w(1 + Aw^m + O(w^{m+1})) where A ≠ 0 and m ∈ ℕ. By taking a linear coordinate change w ↦ A^{1/m}w, we may assume that A = 1. In the theory of complex dynamics such a germ appears when we consider iteration of local dynamics near the parabolic periodic points, and it plays very important roles. (See [Mi] and [Sh] for example.) Now we consider a perturbation f_ε → f of the form f_ε(w) = Λ_ε w(1 + w^m + O(w^{m+1}))...
On Nonregular Feedback Linearization
This paper investigates the use of nonregular (not necessarily regular) static/dynamic state feedbacks to achieve feedback linearization of affine nonlinear systems. First, we provide an example which is nonregular static feedback linearizable but is not regular dynamic feedback linearizable. Then, we present some necessary conditions as well as sufficient conditions for nonregular feedback lin...
Learning algorithms based on linearization.
The aim of this article is to investigate a mechanical description of learning. A framework for local and simple learning algorithms, based on interpreting a neural network as a set of configuration constraints, is proposed. For any architectural design and learning task, unsupervised and supervised algorithms can be derived, optionally using unconstrained and hidden neurons. Unlike algorithms ba...
Journal
Journal title: Aequationes mathematicae
Year: 2019
ISSN: 0001-9054,1420-8903
DOI: 10.1007/s00010-019-00643-y