Search results for: s policy

Number of results: 960,898

2013
Mardi Gras

BEIRUT, Lebanon - President Amin Gemayel’s government scrapped Lebanon’s troop withdrawal pact with Israel yesterday as part of a deal with Syria designed to end the Lebanese civil war. Gemayel held an emergency session of his Council of Ministers to announce abrogation of the U.S.-mediated pact signed May 17. “The council has decided to cancel this ... accord, consider it null and void and ...

2009
Tomonori Ishigaki Katsushige Sawaki

In this paper, we consider a dynamic stochastic inventory model with fixed inventory holding and shortage costs in addition to a fixed ordering cost. We discuss a necessary and sufficient condition for an (s, S) policy to be optimal in the class of such stochastic inventory models. Furthermore, we explore how such a necessary and sufficient condition can be rewritten when the demand distribution...
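For orientation, the (s, S) rule discussed here and in several of the results below can be stated compactly; the notation (inventory level x, reorder point s, order-up-to level S, fixed ordering cost K, holding and shortage rates h and p) is generic rather than quoted from the paper:

\[
q(x) = \begin{cases} S - x, & x \le s,\\ 0, & x > s, \end{cases}
\qquad
\text{one-period cost} = K\,\mathbf{1}\{q(x) > 0\} + h\,\mathbb{E}\big[(y - D)^{+}\big] + p\,\mathbb{E}\big[(D - y)^{+}\big],
\]

where y = x + q(x) is the post-order inventory position and D the random one-period demand. The question addressed in the abstract is under what conditions a policy of this form is optimal.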

2010
Kimitoshi Sato Katsushige Sawaki

Not only the amount of product demanded, but also the price of the product has a strong impact on a manufacturer’s revenue. In this paper we consider a continuous-time inventory model where the spot price of the product stochastically fluctuates according to a Brownian motion. Should information on the spot price be available, the manufacturer would wish to buy the product on the spot market w...
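The truncated abstract does not give the exact price dynamics; a specification consistent with "fluctuates according to a Brownian motion" would be either an arithmetic or a geometric Brownian motion for the spot price P_t, e.g.

\[
dP_t = \mu\,dt + \sigma\,dW_t
\qquad \text{or} \qquad
dP_t = \mu P_t\,dt + \sigma P_t\,dW_t,
\]

where W_t is a standard Brownian motion, μ a drift parameter and σ > 0 a volatility parameter; which form the authors actually use cannot be determined from the snippet.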

2004
Alain Bensoussan

Abstract: We prove that an (s, S) policy is optimal in a continuous-review stochastic inventory model with a fixed ordering cost when the demand is (i) a diffusion process and a compound Poisson process with exponentially distributed jump sizes, and (ii) a mixture of a constant demand and a compound Poisson process. The proof uses the theory of impulse control. The Bellman equation of dynamic p...
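The impulse-control argument mentioned in the abstract is typically organized around a quasi-variational inequality; a generic form (illustrative notation, not quoted from the paper) is

\[
\min\big\{\mathcal{L}v(x) + f(x),\; \mathcal{M}v(x) - v(x)\big\} = 0,
\qquad
\mathcal{M}v(x) = K + \inf_{y \ge x}\big\{c(y - x) + v(y)\big\},
\]

where v is the value function, \(\mathcal{L}\) the generator of the uncontrolled inventory process, f the running holding/shortage cost, K the fixed ordering cost and c(·) the variable ordering cost. An (s, S) policy corresponds to the case where the intervention region is a half-line x ≤ s and the optimal order-up-to point is a single level S.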

2005
Maria José S. Salgado Márcio G. P. Garcia Marcelo C. Medeiros

This paper uses a Threshold Autoregressive (TAR) model with exogenous variables to explain a change in regime in Brazilian nominal interest rates. By using an indicator of currency crises, the model tries to explain the difference in the dynamics of nominal interest rates during and outside of a currency crisis. The paper then compares the performance of the nonlinear model to a modified Taylor Rule...
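A two-regime TAR specification with exogenous variables of the kind described here can be written as follows; the regressors, threshold variable and threshold value used by the authors are not visible in the truncated abstract, so the symbols are generic:

\[
i_t = \big(\phi_0 + \phi_1 i_{t-1} + \beta' x_t\big)\,\mathbf{1}\{z_t \le c\}
    + \big(\theta_0 + \theta_1 i_{t-1} + \gamma' x_t\big)\,\mathbf{1}\{z_t > c\}
    + \varepsilon_t,
\]

where i_t is the nominal interest rate, x_t the exogenous regressors, z_t the currency-crisis indicator acting as the threshold variable, c the threshold and ε_t an error term.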

Journal: :JAMDS 2001
Omar Ben-Ayed

Operations Research techniques are usually presented as distinct models. Difficult as it may often be, achieving linkage between these models could reveal their interdependency and make them easier for the user to understand. In this article three different models, namely Markov Chain, Dynamic Programming, and Markov Sequential Decision Processes, are used to solve an inventory problem based on...

Journal: :Operations Research 1991
Yu-Sheng Zheng Awi Federgruen

In this paper, a new algorithm for computing optimal (s, S) policies is derived based upon a number of new properties of the infinite horizon cost function c(s, S) as well as a new upper bound for optimal order-up-to levels S* and a new lower bound for optimal reorder levels s*. The algorithm is simple and easy to understand. Its computational complexity is only 2.4 times that required to evalu...
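For a sense of what the cost function c(s, S) measures, the sketch below estimates the long-run average cost of a candidate (s, S) pair by naive Monte Carlo simulation and brute-forces a small grid. It is written in Python with invented cost parameters and demand; it is a baseline illustration only, not the Zheng-Federgruen algorithm or its exact renewal-theoretic evaluation of c(s, S).

import random

def average_cost(s, S, K=64.0, h=1.0, p=9.0, horizon=10_000, seed=0):
    """Estimate the long-run average cost of an (s, S) policy by simulation.
    Fixed cost K per order, holding cost h per unit held, shortage cost p
    per unit backlogged; demand is uniform on {0, ..., 10} purely for
    illustration."""
    rng = random.Random(seed)
    x, total = S, 0.0
    for _ in range(horizon):
        if x <= s:              # inventory position at or below the reorder point
            total += K          # pay the fixed ordering cost
            x = S               # order up to S (delivery assumed immediate)
        x -= rng.randint(0, 10)                      # illustrative demand draw
        total += h * max(x, 0) + p * max(-x, 0)      # end-of-period costs
    return total / horizon

# Brute-force a small grid of candidate policies.
best = min(((average_cost(s, S), s, S)
            for s in range(0, 15) for S in range(s + 1, 30)),
           key=lambda t: t[0])
print("approximate best (cost, s, S):", best)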

2012
Elizabeth R. Peterson

Elizabeth R. Peterson, Department of Psychology, University of Auckland; Christine M. Rubie-Davies, Faculty of Education, University of Auckland; Margaret J. Elley-Brown, Centre for Child and Family Policy Research, University of Auckland; Deborah A. Widdowson, Centre for Child and Family Policy Research, University of Auckland; Robyn S. Dixon, Centre for Child and Family Policy Research, University of ...

2011
Thomas G. Blomberg Carly Knight Abigail A. Fagan Daniel P. Mears

EDITORIAL INTRODUCTION. From mass incarceration to targeted policing: Introduction to the Special Issue (p. 3), Richard Rosenfeld. EXECUTIVE SUMMARY. Overview of “Imprisonment and crime: Can both be reduced?” (p. 9), Steven N. Durlauf, Daniel S. Nagin. RESEARCH ARTICLE. Impr...

2013
Petar Kormushev Darwin G. Caldwell

In Reinforcement Learning (RL), the goal is to find a policy π that maximizes the expected future return, calculated based on a scalar reward function R(·) ∈ ℝ. The policy π determines what actions will be performed by the RL agent. Traditionally, the RL problem is formulated in terms of a Markov Decision Process (MDP) or a Partially Observable MDP (POMDP). In this formulation, the policy π is v...
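To make the terms concrete, here is a minimal self-contained sketch of a tabular policy π as a state-to-action mapping together with a Monte Carlo estimate of its expected discounted return. The two-state MDP, its transition probabilities and rewards are invented for illustration and are not taken from the paper.

import random

# Toy MDP: states 0 and 1, actions "stay" and "move" (purely illustrative).
TRANSITIONS = {  # (state, action) -> list of (next_state, probability)
    (0, "stay"): [(0, 0.9), (1, 0.1)],
    (0, "move"): [(1, 0.8), (0, 0.2)],
    (1, "stay"): [(1, 0.9), (0, 0.1)],
    (1, "move"): [(0, 0.8), (1, 0.2)],
}
REWARD = {(0, "stay"): 0.0, (0, "move"): -1.0,
          (1, "stay"): 2.0, (1, "move"): -1.0}

policy = {0: "move", 1: "stay"}  # deterministic tabular policy pi: state -> action

def step(state, action, rng):
    """Sample the next state from the transition distribution."""
    r, acc = rng.random(), 0.0
    for nxt, prob in TRANSITIONS[(state, action)]:
        acc += prob
        if r <= acc:
            return nxt
    return nxt  # numerical fallback

def expected_return(policy, gamma=0.95, episodes=5_000, horizon=200, seed=0):
    """Monte Carlo estimate of the expected discounted return from state 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        state, g, discount = 0, 0.0, 1.0
        for _ in range(horizon):
            action = policy[state]
            g += discount * REWARD[(state, action)]
            discount *= gamma
            state = step(state, action, rng)
        total += g
    return total / episodes

print("estimated discounted return of pi:", expected_return(policy))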

Chart: number of search results per year
