Search results for: markov
Number of results: 71738
Rivers and their moist riparian margins are an important component of a watershed and can exert significant control over the physical and ecological conditions of downstream areas, such as controlling erosion and reducing sediment production. They also improve river water quality, provide suitable habitat for wildlife and aquatic life, produce livestock forage, recharge groundwater aquifers, moderate temperature, regulate the growth of aquatic plants, and ultimately serve as a living bank of biodiversity...
Background & aim: The aim of the current study was to investigate the advantages of the Bayesian method, in comparison with traditional methods, for detecting the best antioxidant in the freezing of human male gametes. Methods & materials: Semen samples were obtained from 40 men whose sperm met normal criteria. A part of each sample was separated without antioxidant as fresh, and the remaini...
We propose to use a mathematical method based on stochastic comparisons of Markov chains in order to derive performance index bounds. The main goal of this paper is to investigate various monotonicity properties of a single-server retrial queue with a first-come-first-served (FCFS) orbit and general retrial times, using stochastic ordering techniques.
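The snippet above rests on the strong stochastic (st) ordering of Markov chains. A minimal numerical sketch of the standard comparison argument, with hypothetical 3-state transition matrices chosen purely for illustration: if P is st-monotone and each row of P is st-dominated by the corresponding row of Q, the chain driven by Q st-dominates the chain driven by P at every step.

```python
import numpy as np

def st_leq(p, q, tol=1e-12):
    # p <=_st q on an ordered finite state space: every tail sum of p
    # is bounded by the corresponding tail sum of q.
    return bool(np.all(np.cumsum(p[::-1])[::-1] <= np.cumsum(q[::-1])[::-1] + tol))

def st_monotone(P):
    # P is st-monotone if its rows are st-increasing in the row index.
    return all(st_leq(P[i], P[i + 1]) for i in range(P.shape[0] - 1))

# Hypothetical transition matrices (illustration only, not from the paper):
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
Q = np.array([[0.4, 0.3, 0.3],
              [0.2, 0.4, 0.4],
              [0.1, 0.3, 0.6]])   # each row st-dominates the same row of P

assert st_monotone(P)
assert all(st_leq(P[i], Q[i]) for i in range(3))

# Comparison theorem: starting from the same distribution, the marginal
# distributions stay st-ordered at every step, giving bounds on any
# increasing performance index.
mu_P = mu_Q = np.array([1.0, 0.0, 0.0])
for n in range(20):
    mu_P, mu_Q = mu_P @ P, mu_Q @ Q
    assert st_leq(mu_P, mu_Q)
```

The point of such bounds is that Q can be chosen simpler than P (e.g. with a tractable stationary distribution), so any increasing reward computed under Q bounds the one under P.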
Land use changes for the Ekbatan dam watershed were simulated from Landsat data for 1992, 2000, and 2013 using the CA-Markov model. Two classification methods were initially used, viz. maximum likelihood (MAL) and support vector machine (SVM). Although both methods showed high overall accuracy and kappa coefficients, MAL visually failed in separating land uses, particularly built-up and dry lands. There...
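The Markov half of a CA-Markov run amounts to cross-tabulating two classified maps into a class-transition matrix and projecting class proportions forward. A toy sketch (class codes and maps below are hypothetical, not the watershed data from the snippet):

```python
import numpy as np

# Hypothetical classified maps for two dates, classes 0..2
# (e.g. 0 = built-up, 1 = dry land, 2 = other) -- illustration only.
rng = np.random.default_rng(1)
map_t0 = rng.integers(0, 3, size=(100, 100))
# Simulate some change: ~10% of cells convert to built-up by the second date.
map_t1 = np.where(rng.random((100, 100)) < 0.1, 0, map_t0)

K = 3
# Cross-tabulate: counts[i, j] = number of cells moving from class i to class j.
counts = np.zeros((K, K))
np.add.at(counts, (map_t0.ravel(), map_t1.ravel()), 1)
M = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

# Project class proportions one period ahead with the Markov assumption.
p_t1 = np.bincount(map_t1.ravel(), minlength=K) / map_t1.size
p_t2 = p_t1 @ M
assert np.isclose(p_t2.sum(), 1.0)
```

The cellular-automaton part then allocates those projected proportions spatially using suitability maps and neighbourhood rules, which this sketch deliberately omits.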
The rapidly growing volume of encrypted traffic hides a large number of malicious behaviours. The difficulty of collecting and labelling such traffic makes the class distribution of the dataset seriously imbalanced, which leads to poor generalisation ability in the classification model. To solve this problem, a new representation learning method and its diversity enhancement model are proposed, which use images to represent samples. First, is trans...
One of the basic facts known for discrete-time Markov decision processes is that, if the probability distribution of the initial state is fixed, then for every policy it is easy to construct a (randomized) Markov policy with the same marginal distributions of state-action pairs as the original policy. This equality implies that the values of major objective criteria, including expected discounted total costs and average rewards per unit time, ...
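The fact quoted in this snippet can be checked by brute force on a toy MDP: take any history-dependent policy, read off its state-action marginals, and build the randomized Markov policy directly from those marginals. A minimal enumeration sketch (the MDP, horizon, and policy below are hypothetical, chosen only to make the check concrete):

```python
import itertools
import numpy as np

# Toy MDP: 2 states, 2 actions, horizon 3 (all numbers hypothetical).
S, A, T = 2, 2, 3
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
mu = np.array([0.6, 0.4])                    # initial state distribution

def history_policy(history, s):
    # Deliberately history-dependent: the chance of action 1 depends on
    # the parity of 1-actions taken so far, not just on the current state.
    k = sum(a for (_, a) in history)
    p1 = 0.2 + 0.6 * (k % 2)
    return np.array([1 - p1, p1])

def trajectory_probs(policy_fn):
    # Enumerate every trajectory (s0,a0,...,s_{T-1},a_{T-1}) with its probability.
    probs = {}
    for traj in itertools.product(range(S), range(A), repeat=T):
        pairs = [(traj[2 * t], traj[2 * t + 1]) for t in range(T)]
        p = mu[pairs[0][0]]
        for t, (s, a) in enumerate(pairs):
            p *= policy_fn(pairs[:t], s, t)[a]
            if t + 1 < T:
                p *= P[s, a][pairs[t + 1][0]]
        probs[tuple(pairs)] = p
    return probs

def marginals(probs):
    m = np.zeros((T, S, A))
    for pairs, p in probs.items():
        for t, (s, a) in enumerate(pairs):
            m[t, s, a] += p
    return m

# State-action marginals of the history-dependent policy.
m_hist = marginals(trajectory_probs(lambda h, s, t: history_policy(h, s)))

# Randomized Markov policy built from those marginals: pi_t(a|s).
denom = m_hist.sum(axis=2, keepdims=True)
pi = np.divide(m_hist, denom, out=np.full_like(m_hist, 1.0 / A), where=denom > 0)

m_markov = marginals(trajectory_probs(lambda h, s, t: pi[t, s]))

# Same marginals at every time step, hence equal discounted/average values.
assert np.allclose(m_hist, m_markov)
```

Since every standard objective (discounted cost, average reward) is a linear functional of these marginals, the equality of marginals immediately gives equality of values, which is the point of the quoted fact.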