Search results for: markov chain

Number of results: 336523

2010
RYAN WANG

This paper gives a brief introduction to Markov Chain Monte Carlo methods, which offer a general framework for calculating difficult integrals. We start with the basic theory of Markov chains and build up to a theorem that characterizes convergent chains. We then discuss the Metropolis-Hastings algorithm.
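
As a rough illustration of the Metropolis-Hastings algorithm this abstract mentions, here is a minimal Python sketch; the standard-normal target, random-walk proposal, and sample counts are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def metropolis_hastings(log_target, n_samples=10000, x0=0.0, scale=1.0, seed=0):
    """Sample from an unnormalized target density with a Gaussian
    random-walk Metropolis-Hastings chain."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x, log_p = x0, log_target(x0)
    for i in range(n_samples):
        # Symmetric Gaussian proposal, so the Hastings ratio reduces
        # to the ratio of target densities.
        x_new = x + scale * rng.normal()
        log_p_new = log_target(x_new)
        if np.log(rng.uniform()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new   # accept the proposal
        samples[i] = x                    # on rejection, keep the old state
    return samples

# Example: standard normal target; after burn-in the sample variance is near 1.
samples = metropolis_hastings(lambda x: -0.5 * x**2)
print(samples[1000:].var())
```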

2003
RAGNAR NORBERG

We consider a financial market driven by a continuous-time homogeneous Markov chain. Conditions for absence of arbitrage and for completeness are spelled out, non-arbitrage pricing of derivatives is discussed, and details are worked out for some cases. Closed-form expressions are obtained for interest rate derivatives. Computations typically amount to solving a set of first-order partial differ...

1999
Radford M. Neal

I show how to run an N-time-step Markov chain simulation in a circular fashion, so that the state at time 0 follows the state at time N−1 in the same way as states at times t follow those at times t−1 for 0 < t < N. This wrap-around of the chain is achieved using a coupling procedure, and produces states that all have close to the equilibrium distribution of the Markov chain, under the assumpt...

2015
Eric P. Xing; Scribes: Heran Lin, Bin Deng, Yun Huang

which decreases as J gets larger. So the approximation will be more accurate as we obtain more samples. Here is an example of using Monte Carlo methods to integrate away weights in Bayesian neural networks. Let y(x) = f(x,w) for response y and input x, and let p(w) be the prior over the weights w. The posterior distribution of w given the data D is p(w|D) ∝ p(D|w)p(w) where p(D|w) is the likeli...
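
A minimal sketch of the Monte Carlo idea in this snippet: the posterior predictive mean E[y(x) | D] = ∫ f(x, w) p(w | D) dw is approximated by averaging f(x, w_j) over J weight samples. The toy "network" and the stand-in posterior samples below are purely hypothetical; in practice the w_j would come from an MCMC sampler targeting p(w | D).

```python
import numpy as np

def predictive_mean(f, x, posterior_samples):
    """Monte Carlo estimate of E[y(x) | D], assuming the weight samples
    w_j were drawn from the posterior p(w | D)."""
    return np.mean([f(x, w) for w in posterior_samples])

# Toy stand-in: a scalar-weight "network" f(x, w) = tanh(w * x), with
# J = 500 fake posterior samples drawn from a normal (illustration only).
rng = np.random.default_rng(0)
posterior_samples = rng.normal(loc=1.0, scale=0.2, size=500)
print(predictive_mean(lambda x, w: np.tanh(w * x), 0.5, posterior_samples))
```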

Thesis: Ministry of Science, Research and Technology - Alzahra University - Faculty of Basic Sciences, 1387

No abstract available.

Journal: Iranian Journal of Economic Studies, 2012
Ali Mohammadi, Ahmad Rajabi

Abstract: In this paper, Markov chain and dynamic programming were used to represent a suitable pattern for tax relief and tax evasion decrease based on tax earnings in Iran from 2005 to 2009. Results of applying this model showed that tax evasion was 6714 billion Rials**. With 4% relief to taxpayers and by calculating the present value of the received tax, it was reduced to 3108 billion Rials. ...

Journal: Journal of Mathematical Modeling, 2014
Gholam Hassan Shirdel, Mohsen Abdolhosseinzadeh

The probable lack of some arcs and nodes in stochastic networks is considered in this paper, and its effect is shown as the arrival probability from a given source node to a given sink node. A discrete-time Markov chain with an absorbing state is established in a directed acyclic network. Then, the probability of transition from the initial state to the absorbing state is computed. It is as...
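
As a hedged sketch of the standard absorbing-chain computation this abstract alludes to: with the transition matrix in canonical form [[Q, R], [0, I]], absorption probabilities from each transient state are B = (I − Q)^{-1} R. The small transition matrix below is a made-up example, not taken from the paper; a second absorbing "blocked" state stands in for paths lost to missing arcs or nodes.

```python
import numpy as np

# Hypothetical chain: transient states 0 (source), 1, 2; absorbing states
# 3 (sink) and 4 (blocked). Q holds transient-to-transient probabilities,
# R holds transient-to-absorbing probabilities; each full row sums to 1.
Q = np.array([[0.0, 0.5, 0.3],
              [0.0, 0.0, 0.6],
              [0.0, 0.0, 0.0]])
R = np.array([[0.1, 0.1],
              [0.3, 0.1],
              [0.8, 0.2]])

# Fundamental matrix N = (I - Q)^(-1); B[i, k] is the probability of being
# absorbed in absorbing state k when starting from transient state i.
N = np.linalg.inv(np.eye(Q.shape[0]) - Q)
B = N @ R
print(B[0, 0])  # arrival probability from the source (state 0) to the sink
```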

Journal: Journal of Biostatistics and Epidemiology
Maryam Zamani, Department of Biostatistics, School of Health, Kerman University of Medical Sciences, Kerman, Iran; Abbas Bahrampour, Department of Biostatistics and Epidemiology, School of Health, Kerman, Research Center for Modeling in Health, Kerman University of Medical Sciences, Iran; Nouzar Nakhaee, Department of Neurology, Neuroscience Research Center, Kerman University of Medical Sciences, Kerman, Iran.

background & aim: chronic diseases impact not only on patients but also on their family members’ lives. this study aims to determine dimensions of family dermatology life quality index (fdlqi) questionnaire by the use of classic and bayesian factor analysis (bfa) factor. methods & materials: in this study, fdlqi questionnaire distributed among 100 family members of dermatological patients. bfa ...

1993
Jeffrey Horn. San Mateo: Morgan Kaufmann

Finite, discrete-time Markov chain models of genetic algorithms have been used successfully in the past to understand the complex dynamics of a simple GA. Markov chains can exactly model the GA by accounting for all of the stochasticity introduced by various GA operators, such as initialization, selection, crossover, and mutation. Although such models quickly become unwieldy with increasing pop...
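
To make the exact-model idea in this abstract concrete, here is a hedged toy sketch: a mutation-only "GA" with a population of a single bitstring, so the state space is all 2^L strings and the transition matrix can be written out exactly. Selection and crossover are omitted, and the string length and mutation rate below are illustrative assumptions, not from the paper.

```python
import numpy as np
from itertools import product

L, p_mut = 3, 0.1
states = list(product([0, 1], repeat=L))   # all 2^L possible populations

def transition_prob(s, t):
    """Exact probability that independent bitwise mutation turns state s into t."""
    return np.prod([p_mut if a != b else 1 - p_mut for a, b in zip(s, t)])

P = np.array([[transition_prob(s, t) for t in states] for s in states])

# Each row of P sums to 1; powers of P give the exact distribution over
# populations after n generations, with no sampling noise.
dist = np.zeros(len(states))
dist[0] = 1.0                               # start from the all-zeros string
print(dist @ np.linalg.matrix_power(P, 50)) # distribution after 50 generations
```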

2006
GIACOMO ALETTI

Given a strongly stationary Markov chain and a finite set of stopping rules, we prove the existence of a polynomial algorithm which projects the Markov chain onto a minimal Markov chain without redundant information. Markov complexity is hence defined and tested on some classical problems.

Chart: number of search results per year
