Search results for: mdp
Number of results: 3240
Membrane dipeptidase (MDP; EC 3.4.13.19) enzymic activity that was inhibited by cilastatin has been detected on the surface of 3T3-L1 cells. On differentiation of the cells from fibroblasts to adipocytes the activity of MDP increased 12-fold. Immunoelectrophoretic blot analysis indicated that on adipogenesis the increase in the amount of MDP preceded the appearance of GLUT-4. MDP on 3T3-L1 adip...
Samuel P. M. Choi, Dit-Yan Yeung, Nevin L. Zhang ([email protected], [email protected], [email protected]), Department of Computer Science, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. Abstract: Traditional reinforcement learning (RL) assumes that environment dynamics do not change over time (i.e., are stationary). This assumption, however, is not realistic in many real-...
The cell surface component CD14 and the toll-like receptors 2 and 4 (TLR2 and TLR4) are important in mediating the immune responses to bacterial products in mammals. Using mice genetically deficient in CD14, TLR2, or TLR4, we studied the role of these molecules in the anorectic effects of LPS and muramyl dipeptide (MDP). CD14 or TLR2 knockout (KO) and TLR4-deficient (TLR4-DEF) mice as well as c...
We develop an exhaustive study of Markov decision processes (MDP) under mean-field interaction, on both states and actions, in the presence of common noise, when optimization is performed over open-loop controls on an infinite horizon. Such a model, called CMKV-MDP for conditional McKean–Vlasov MDP, arises and is obtained here rigorously, with a rate of convergence, as the asymptotic problem of N cooperative agents controlled b...
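For orientation, here is the standard discounted infinite-horizon MDP objective that mean-field models of this kind generalize; the notation is generic and chosen for illustration, not taken from the paper, which additionally conditions on the common noise:

```latex
% Generic infinite-horizon discounted MDP objective (illustrative notation).
% In a mean-field (McKean--Vlasov) variant, the reward f and the transition
% kernel also depend on the joint law \mathcal{L}(X_t, \alpha_t) of state and
% action; in the conditional (CMKV) case, on that law given the common noise.
V(x) \;=\; \sup_{\alpha}\; \mathbb{E}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, f(X_t, \alpha_t) \;\middle|\; X_0 = x \right]
```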
We study the use of inverse reinforcement learning (IRL) as a tool for recognition of agents on the basis of observation of their sequential decision behavior. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of an agent in terms of forward planning for the MDP. The reality of the agent’s decision problem and process may not be expresse...
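To make "recognition via IRL" concrete, one standard observation model (a common Boltzmann-rational assumption, not necessarily the exact likelihood used in this paper) scores a candidate reward function R by how probable it makes the observed trajectory τ = (s_0, a_0, s_1, a_1, ...):

```latex
% Boltzmann-rational observation model, a standard IRL assumption: the agent
% picks actions with probability proportional to exp(beta * Q^*_R(s, a)),
% where Q^*_R is the optimal Q-function of the MDP with reward R.
P(a \mid s, R) \;=\; \frac{\exp\big(\beta\, Q^{*}_{R}(s,a)\big)}{\sum_{a'} \exp\big(\beta\, Q^{*}_{R}(s,a')\big)},
\qquad
P(\tau \mid R) \;=\; \prod_{t} P(a_t \mid s_t, R)
```

Recognition then reduces to comparing P(τ | R_i) across the reward functions hypothesized for the different agents.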
Partially Observable Markov Decision Processes (POMDPs) offer an elegant framework to model sequential decision making in uncertain environments. Solving POMDPs online is an active area of research, and given the size of real-world problems, approximate solvers are used. Recently, a few approaches have been suggested for solving POMDPs by using MDP solvers in conjunction with imitation learning. ...
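As a rough illustration of that recipe under toy assumptions (solve the fully observable underlying MDP, label beliefs with a QMDP-style expert, then clone that expert from data), the sketch below is hypothetical throughout; the problem, names like `qmdp_action`, and the 1-NN cloner are stand-ins, not the paper's method:

```python
# Sketch: imitation learning from an MDP-based expert for a toy POMDP.
import numpy as np

S, A = 4, 2                                  # states, actions of the underlying MDP
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(S), size=(S, A))   # T[s, a] = next-state distribution
R = rng.uniform(size=(S, A))                 # R[s, a] = immediate reward
gamma = 0.95

# 1) Solve the fully observable MDP by value iteration to get Q(s, a).
Q = np.zeros((S, A))
for _ in range(500):
    V = Q.max(axis=1)
    Q = R + gamma * np.einsum('saj,j->sa', T, V)

def qmdp_action(belief):
    """QMDP heuristic: act greedily w.r.t. the belief-averaged Q-values."""
    return int(np.argmax(belief @ Q))

# 2) Generate an 'expert' dataset of (belief, action) pairs.
beliefs = rng.dirichlet(np.ones(S), size=1000)
labels = np.array([qmdp_action(b) for b in beliefs])

# 3) Behavior cloning: any supervised learner works; here, 1-nearest neighbour.
def imitate(belief):
    i = np.argmin(np.linalg.norm(beliefs - belief, axis=1))
    return labels[i]

print(imitate(rng.dirichlet(np.ones(S))))    # imitated action for a fresh belief
```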
In this paper an implementation of a Blackjack agent is discussed. The agent uses a Markov decision process (MDP) to learn about the game world of Blackjack and exploits its knowledge to play successfully. Value iteration and Q-learning are used, allowing the agent to propagate its knowledge back to every state from the terminal states. Feature extraction is used to speed up this process, as th...
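A minimal tabular Q-learning sketch on a stand-in episodic chain MDP shows the back-propagation of value from terminal states that the snippet mentions; the actual Blackjack states, rewards, and feature extraction are not reproduced here:

```python
# Sketch: tabular Q-learning on a toy episodic chain MDP, standing in for the
# Blackjack game world described above (states and rewards are illustrative).
import random

N = 6                       # chain states 0..5; state 5 is terminal and rewarding
actions = [0, 1]            # 0 = move left, 1 = move right
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1              # next state, reward, terminal?

for _ in range(2000):                      # episodes
    s, done = 0, False
    while not done:
        a = random.choice(actions) if random.random() < eps \
            else max(actions, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # value backed up from terminal
        s = s2

print([max(actions, key=lambda b: Q[(s, b)]) for s in range(N - 1)])  # greedy policy
```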
The discovery of quantitative trait loci (QTL) in model organisms has relied heavily on the ability to perform controlled breeding to generate genotypic and phenotypic diversity. Recently, we and others have demonstrated the use of an existing set of diverse inbred mice (referred to here as the mouse diversity panel, MDP) as a QTL mapping population. The use of the MDP population has many advan...
The purpose of this study was to investigate the effect of a 4-MET- and 10-MDP-based primer on the bond strength of two resin cements (SuperBond C&B, Sun Medical; Panavia Fluoro Cement, Kuraray) to titanium (Ti). Ti plates were treated with six experimental primers consisting of, respectively, 10-MDP and 4-MET in concentrations of 0.1, 1 and 10 wt%, or were kept untreated (control). The highest ...
Activation of peritoneal macrophages from guinea pigs by various bacterial cell walls, M-1 endo-N-acetylmuramidase enzymatically digested bacterial cell walls, and synthetic muramyl dipeptides was studied in terms of stimulation of [14C]glucosamine incorporation. All test bacterial cell wall preparations significantly increased [14C]glucosamine uptake by the macrophages. Some of the water-sol...