Search results for: state mode transition probability matrix tpm

Number of results: 1757360

2016
Mark Kempton

We study the mixing rate of non-backtracking random walks on graphs by looking at non-backtracking walks as walks on the directed edges of a graph. A result known as Ihara’s Theorem relates the adjacency matrix of a graph to a matrix related to non-backtracking walks on the directed edges. We prove a weighted version of Ihara’s Theorem which relates the transition probability matrix of a non-ba...
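A minimal sketch (my own illustration, not the paper's construction) of the directed-edge view described above: each directed edge (u, v) becomes a state, and a non-backtracking step moves from (u, v) to (v, w) with w != u, giving a transition probability matrix on directed edges. The small graph used here is hypothetical.

```python
import numpy as np

# Hypothetical undirected graph (4-cycle with a chord), given as an adjacency list.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}

# States are directed edges (u, v) with v adjacent to u.
edges = [(u, v) for u in adj for v in adj[u]]
idx = {e: i for i, e in enumerate(edges)}

P = np.zeros((len(edges), len(edges)))
for (u, v) in edges:
    # Allowed successors: (v, w) with w != u, i.e. no backtracking.
    succ = [w for w in adj[v] if w != u]
    for w in succ:
        P[idx[(u, v)], idx[(v, w)]] = 1.0 / len(succ)

# Each row sums to 1, so P is a valid transition probability matrix on directed edges.
assert np.allclose(P.sum(axis=1), 1.0)
print(P.shape)
```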

Journal: SIAM J. Matrix Analysis Applications 2011
Arno Berger Theodore P. Hill Bahar Kaynar Ad Ridder

A sequence of real numbers (x_n) is Benford if the significands, i.e., the fraction parts in the floating-point representation of (x_n), are distributed logarithmically. Similarly, a discrete-time irreducible and aperiodic finite-state Markov chain with transition probability matrix P and limiting matrix P* is Benford if every component of both sequences of matrices (P^n - P*) and (P^{n+1} - P^n) is ...
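A rough numerical sketch of the property being tested (my own toy chain, not from the paper): compute the entries of P^n - P* for a small irreducible, aperiodic chain and compare the empirical leading-digit frequencies with the Benford probabilities log10(1 + 1/d).

```python
import numpy as np

# Illustrative irreducible, aperiodic transition probability matrix.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.5, 0.3]])

# Limiting matrix P*: every row equals the stationary distribution.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
P_star = np.tile(pi, (3, 1))

# Collect leading significant digits of |(P^n - P*)| entries for n = 1..60.
digits = []
M = np.eye(3)
for n in range(1, 61):
    M = M @ P
    vals = np.abs(M - P_star).ravel()
    vals = vals[vals > 1e-300]
    digits.extend(int(f"{v:e}"[0]) for v in vals)

counts = np.bincount(digits, minlength=10)[1:10] / len(digits)
benford = np.log10(1 + 1 / np.arange(1, 10))
print(np.round(counts, 3))   # empirical leading-digit frequencies
print(np.round(benford, 3))  # Benford probabilities for digits 1..9
```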

1998
Yi Sun Lang Tong

In this paper, the base-band signal collected from an unknown, multipath, multi-receiver FIR channel is viewed as a state sequence generated by a hidden Markov model (HMM) whose states and order are unknown and whose transition probability matrix, with an unknown permutation, is known once the order is given. Based on this view, two types of algorithms are developed for acquisition and tracking,...
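The paper's own algorithms are not reproduced in the snippet; as a hedged illustration of the HMM view, the sketch below runs the standard forward recursion to score an observation sequence under an assumed transition matrix A, emission matrix B, and initial distribution (all values made up).

```python
import numpy as np

def forward_log_likelihood(A, B, init, obs):
    """Standard HMM forward recursion with per-step scaling.

    A[i, j] : probability of moving from state i to state j
    B[i, k] : probability of emitting symbol k from state i
    init[i] : initial state distribution
    obs     : sequence of observed symbol indices
    """
    alpha = init * B[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_like += np.log(c)   # scaling constants accumulate the log-likelihood
        alpha /= c
    return log_like

# Toy 2-state example with illustrative parameters.
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
init = np.array([0.5, 0.5])
print(forward_log_likelihood(A, B, init, obs=[0, 0, 1, 1, 0]))
```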

Thesis: Ministry of Science, Research and Technology - National Institute of Genetic Engineering and Biotechnology, 1380

The TnrA molecule is a global regulator that responds to the availability of nitrogen sources and plays a regulatory role in gene transcription in the presence of nitrogen. The ScoC molecule is a DNA-binding protein that regulates the onset of sporulation and the production of alkaline protease (AprE). To investigate the relationship of these two transcription factors with alkaline protease production, the tnrA, scoC, and aprE genes of the native bacterium B. clausii EHY L2 were amplified, cloned, and ...

Journal: Journal of Mathematical Modeling 2014
Gholam Hassan Shirdel Mohsen Abdolhosseinzadeh

The probable lack of some arcs and nodes in stochastic networks is considered in this paper, and its effect is expressed as the arrival probability from a given source node to a given sink node. A discrete-time Markov chain with an absorbing state is established in a directed acyclic network. Then, the probability of transition from the initial state to the absorbing state is computed. It is as...
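A minimal sketch of the computation described (my own made-up network, not the paper's): for an absorbing discrete-time Markov chain with transient block Q and absorbing block R, the absorption probabilities are N R with fundamental matrix N = (I - Q)^{-1}. Here a second absorbing "lost" state stands in for paths broken by missing arcs or nodes.

```python
import numpy as np

# Illustrative chain: states 0-1 are transient, state 2 is the sink,
# state 3 is a "lost" absorbing state modeling missing arcs/nodes.
P = np.array([[0.0, 0.6, 0.2, 0.2],
              [0.0, 0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

Q = P[:2, :2]                      # transient-to-transient block
R = P[:2, 2:]                      # transient-to-absorbing block
N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
absorb = N @ R                     # absorption probabilities by absorbing state

print(absorb[0, 0])                # arrival probability from the source (state 0) to the sink
```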

1999
K V Kheruntsyan

We consider the quantum model of a driven anharmonic oscillator, in the presence of dissipation, and present an exact analytic solution for the corresponding Wigner function in the steady-state regime. This provides explicit phase-space images of the resulting state of the cavity mode, and allows us to understand how the quantum interference is built up into it. The photon number probability di...

1995
Zhenyi Jin

This paper introduces an algorithm and a new graph, the Conditioned Transition Graph (CTG), to derive the mode invariants from a Software Cost Reduction (SCR) mode transition table. An SCR requirements document contains a complete description of the external behavior of the software system. Some system properties, such as mode invariants, can be used to describe safety features that must be en...
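A hypothetical sketch (my own encoding, not the paper's CTG algorithm) of an SCR-style mode transition table as a mapping from (mode, event) to a new mode, with a brute-force check that a candidate invariant holds over all modes reachable from the initial mode.

```python
# Hypothetical SCR-style mode transition table: (mode, event) -> new mode.
transitions = {
    ("Off", "power_on"): "Standby",
    ("Standby", "arm"): "Armed",
    ("Armed", "disarm"): "Standby",
    ("Armed", "power_off"): "Off",
    ("Standby", "power_off"): "Off",
}

def reachable_modes(initial):
    """Collect all modes reachable from the initial mode."""
    seen, frontier = {initial}, [initial]
    while frontier:
        mode = frontier.pop()
        for (m, _event), nxt in transitions.items():
            if m == mode and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Candidate invariant: every reachable mode belongs to this safe set.
safe_modes = {"Off", "Standby", "Armed"}
assert reachable_modes("Off") <= safe_modes
print(reachable_modes("Off"))
```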

2014
Alex H. Lang Charles K. Fisher Thierry Mora Pankaj Mehta

Here we provide more details of the results in the main text. First, we outline our notation. The time-dependent probability of state i is p_i = p_i(t), while the steady-state probability of state i is p_i^ss. The Laplace-transformed probability of state i is P_i(s). The rate to go from state i to state j is k_ij. The probability to transition from state i to state j is q_ij. The time it takes to...
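A small sketch of how these symbols fit together (my assumptions, not the supplement's code): one common convention, assumed here, is that the jump probabilities are q_ij = k_ij / Σ_l k_il, and the steady state p^ss is the null vector of the master-equation generator built from the rates.

```python
import numpy as np

# Illustrative rate matrix: k[i, j] is the rate from state i to state j (k[i, i] = 0).
k = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 0.5],
              [1.0, 3.0, 0.0]])

# Jump-chain transition probabilities q_ij = k_ij / sum_l k_il.
q = k / k.sum(axis=1, keepdims=True)

# Master-equation generator L with L[i, j] = k[j, i] for j != i and
# L[i, i] = -sum_j k[i, j]; its null vector is the steady state p^ss.
L = k.T - np.diag(k.sum(axis=1))
evals, evecs = np.linalg.eig(L)
p_ss = np.real(evecs[:, np.argmin(np.abs(evals))])
p_ss = p_ss / p_ss.sum()

print(np.round(q, 3))
print(np.round(p_ss, 3))   # steady-state probabilities p_i^ss
```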

Journal: Land 2023

The analysis and modeling of spatial-temporal changes in land use can reveal changing urban patterns and trends. In this paper, we introduce a linear transformation optimization Markov (LTOM) model that can be exploited to estimate the state transition probability matrix of land use, building a loosely coupled ANN-CA-LTOM model for simulating and predicting land use changes. Its advantages are that it is flexible with high expansibility; it mainta...
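The LTOM details are not shown in the snippet; as a hedged baseline sketch, a land-use state transition probability matrix can be estimated from two categorical maps at dates t1 and t2 by a normalized cross-tabulation (the maps and class labels below are made up).

```python
import numpy as np

# Illustrative categorical land-use maps (0 = water, 1 = vegetation, 2 = built-up).
map_t1 = np.array([[0, 1, 1, 2],
                   [1, 1, 2, 2],
                   [0, 1, 2, 2]])
map_t2 = np.array([[0, 1, 2, 2],
                   [1, 2, 2, 2],
                   [0, 1, 2, 2]])

n_classes = 3
counts = np.zeros((n_classes, n_classes))
for a, b in zip(map_t1.ravel(), map_t2.ravel()):
    counts[a, b] += 1          # cross-tabulate class at t1 against class at t2

# Row-normalize to obtain the state transition probability matrix.
tpm = counts / counts.sum(axis=1, keepdims=True)
print(np.round(tpm, 3))
```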

2007
T. Iki M. Horiguchi M. Yasuda M. Kurano

Abstract. Based on the temporal difference method in neuro-dynamic programming, an adaptive policy for finite-state Markov decision processes with the average reward is constructed under the minorization condition. We estimate the value function by a learning iteration algorithm, and the adaptive policy is specified as an ε-forced modification of the greedy policy for the estimated value and the es...
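A minimal, hedged sketch (not the authors' construction) of the two ingredients mentioned: a tabular TD(0)-style value update from a sampled transition, and an ε-forced modification of the greedy action choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def td0_update(V, s, r, s_next, alpha=0.1, beta=0.0):
    """One TD(0)-style update of the value estimate; an average-reward variant
    would subtract a reward-rate estimate beta (kept as a placeholder here)."""
    V[s] += alpha * (r - beta + V[s_next] - V[s])

def eps_forced_greedy(q_values, eps=0.1):
    """Greedy action, except a uniformly random action is forced with probability eps."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

# Toy usage with made-up numbers.
V = np.zeros(3)
td0_update(V, s=0, r=1.0, s_next=1)
print(V, eps_forced_greedy(np.array([0.2, 0.5, 0.1])))
```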

[Chart: number of search results per year]