Temporally Autonomous Agent Interaction
Authors
Abstract
In this paper, previous simulations (Conover & Trajkovski, 2007; Conover, 2008b) involving passively interacting temporally autonomous agents are expanded to accommodate active agents which directly communicate, albeit in a primitive manner. Several agent activation models are explored throughout the paper. In each model, agents exchange “beliefs” via simple messages that reflect an agent’s internal state. Loosely speaking, the goal of any given agent is to “convince” neighboring agents to adopt its currently held belief (internal state). Though agents may take on many states during a simulation, each agent communicates its currently active state only with its spatially embedded neighbors.
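To make the belief-promulgation mechanism concrete, the following is a minimal sketch assuming a toroidal grid of agents, per-agent activation timers, and a fixed adoption probability; the names (Agent, receive, ADOPT_PROB) and all parameter values are illustrative assumptions, not the simulation described in the paper.

```python
# Minimal sketch (not the paper's implementation): temporally autonomous
# agents on a toroidal grid exchange "belief" messages with their spatial
# neighbors. Names and constants below are illustrative assumptions.
import random

GRID = 10            # 10 x 10 torus of agents (assumed topology)
BELIEFS = 4          # number of possible internal states
ADOPT_PROB = 0.3     # chance a neighbor is "convinced" by a message

class Agent:
    def __init__(self):
        self.belief = random.randrange(BELIEFS)   # current internal state
        self.period = random.randint(1, 10)       # private activation timer
        self.next_activation = self.period

    def receive(self, belief):
        # A neighbor tries to convince this agent to adopt its belief.
        if random.random() < ADOPT_PROB:
            self.belief = belief

def neighbors(x, y):
    # The four spatially embedded neighbors on the torus.
    return [((x + 1) % GRID, y), ((x - 1) % GRID, y),
            (x, (y + 1) % GRID), (x, (y - 1) % GRID)]

def run(steps=200):
    world = {(x, y): Agent() for x in range(GRID) for y in range(GRID)}
    for t in range(1, steps + 1):
        for (x, y), agent in world.items():
            # Temporal autonomy: each agent acts on its own timer,
            # not on every tick of a shared world clock.
            if t >= agent.next_activation:
                for pos in neighbors(x, y):
                    world[pos].receive(agent.belief)   # send current state only
                agent.next_activation = t + agent.period
    counts = [0] * BELIEFS
    for agent in world.values():
        counts[agent.belief] += 1
    print("belief distribution after", steps, "ticks:", counts)

if __name__ == "__main__":
    run()
```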
Similar resources
A Simulation of Temporally Variant Agent Interaction via Belief Promulgation
This chapter concludes a two-part series that examines the emergent properties of multi-agent communication in “temporally asynchronous” environments. Many traditional agent and swarm simulation environments divide time into discrete “ticks” where all entity behavior is synchronized to a master “world clock”. In other words, all agent behavior is governed by a single timer where all agents act...
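As a rough illustration of the distinction this chapter draws, here is a hedged sketch contrasting a single master world clock with per-agent timers; the act placeholder, the timing constants, and the event-queue scheduling are assumptions for illustration, not the chapter's environment.

```python
# Contrast sketch (assumed, not from the chapter): synchronous activation
# driven by one master "world clock" versus asynchronous activation where
# each agent owns its own timer. `act` stands in for any agent behavior.
import heapq
import random

def act(agent_id, t):
    pass  # placeholder for whatever the agent does when activated

def synchronous(num_agents=5, ticks=3):
    # Every agent acts on every tick of the shared clock, in lockstep.
    for t in range(ticks):
        for agent_id in range(num_agents):
            act(agent_id, t)

def asynchronous(num_agents=5, horizon=3.0):
    # Each agent schedules its own next activation; the event queue
    # interleaves them without any global synchronization point.
    events = [(random.uniform(0.0, 1.0), agent_id) for agent_id in range(num_agents)]
    heapq.heapify(events)
    while events:
        t, agent_id = heapq.heappop(events)
        if t > horizon:
            break
        act(agent_id, t)
        heapq.heappush(events, (t + random.uniform(0.5, 1.5), agent_id))
```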
Inherent Value Systems for Autonomous Mental Development
The inherent value system of a developmental agent enables autonomous mental development to take place right after the agent’s “birth.” Biologically, it is not clear what basic components constitute a value system. In the computational model introduced here, we propose that inherent value systems should have at least three basic components: punishment, reward and novelty with decreasing weights...
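As a purely illustrative reading of that claim, the sketch below combines the three named components with decreasing weights; the specific weights and the way the inputs are measured are assumptions, not values from the paper.

```python
# Illustrative sketch only: one way to combine the three components the
# abstract names (punishment, reward, novelty) with decreasing weights.
# The weights themselves are assumptions, not the authors' values.

def inherent_value(punishment, reward, novelty,
                   w_punishment=1.0, w_reward=0.5, w_novelty=0.25):
    """Scalar value signal; punishment weighs most, novelty least."""
    return -w_punishment * punishment + w_reward * reward + w_novelty * novelty

# Example: a mildly novel, unrewarded, unpunished stimulus still carries
# some positive value, encouraging exploration early in development.
print(inherent_value(punishment=0.0, reward=0.0, novelty=0.8))  # 0.2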
Agent Interaction via Message-Based Belief Communication
This work reflects continued research into “temporally autonomous” multi-agent interaction. Many traditional approaches to modeling multi-agent systems involve synchronizing all agent activity in simulated environments to a single “universal” clock. In other words, agent behavior is regulated by a global timer where all agents act and interact deterministically in time. However, if the objecti...
Symbol emergence by combining a reinforcement learning schema model with asymmetric synaptic plasticity
A novel integrative learning architecture, an RLSM combined with an STDP network, is described. This architecture models symbol emergence in an autonomous agent engaged in reinforcement learning tasks. The architecture consists of two constitutional learning architectures: a reinforcement learning schema model (RLSM) and a spike timing-dependent plasticity (STDP) network. RLSM is an incremental modular reinf...
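For orientation, the fragment below sketches only the standard pair-based STDP weight update that such a network typically relies on; the constants are assumed, and the RLSM component and its integration with the STDP network are not reproduced here.

```python
# Hedged sketch of the STDP ingredient only (pair-based rule); constants
# A_PLUS, A_MINUS, TAU are illustrative assumptions.
import math

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # ms-scale time constant (assumed)

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair.

    Pre-before-post (causal) potentiates; post-before-pre depresses,
    which is what makes the plasticity asymmetric in time.
    """
    dt = t_post - t_pre
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

# Example: a presynaptic spike 5 ms before the postsynaptic spike
# strengthens the synapse slightly.
print(stdp_delta_w(t_pre=100.0, t_post=105.0))
```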
Incremental acquisition of behaviors and signs based on a reinforcement learning schemata model and a spike timing-dependent plasticity network
A novel integrative learning architecture based on a reinforcement learning schemata model (RLSM) with a spike timing-dependent plasticity (STDP) network is described. This architecture models operant conditioning with discriminative stimuli in an autonomous agent engaged in multiple reinforcement learning tasks. The architecture consists of two constitutional learning architectures: RLSM and S...