Time-Situated Agency: Active Logic and Intention Formation

Authors

  • Michael L. Anderson
  • Darsana P. Josyula
  • Yoshi A. Okamoto
  • Don Perlis
Abstract

In recent years, embodied cognitive agents have become a central research focus in Cognitive Science. We suggest that there are at least three aspects of embodiment (physical, social, and temporal) which must be treated simultaneously to make possible a realistic implementation of agency. In this paper we detail the ways in which attention to the temporal embodiment of a cognitive agent (perhaps the most neglected aspect of embodiment) can enhance the ability of an agent to act in the world, both in itself and by supporting more robust integration with the physical and social worlds.

1 Three Aspects of Embodiment in Cognitive Science

Any implemented, interactive system is embedded in the world in at least three ways: temporally, physically and socially. Its processes take time and unfold within time; likewise, the system takes up space by (at least) being instantiated in and utilizing components with discrete and limited storage locations, and perhaps by being tightly and exclusively bound to a particular physical object (a robot being the most salient example); finally, insofar as it can at least receive commands and/or display results, it exists within and is oriented toward an interpretive, social world, a world not of objects but of other agents. It is well known by now that work in Cognitive Science in general, and in the new robotics movement in particular, has been rethinking the significance of the physical embodiment of cognitive systems: the physical space in which a system exists, and the physical object in which it is instantiated, are not merely limitations to work within, but represent a resource to exploit. Thus a great deal of attention has been paid to systems which are aware of, prepared to utilize, treat as relevant, take advantage of, be affected by, and even change their physical environment in support of their goals and processes.
We believe that the same kind of re-thinking must take place for the temporal and social embeddedness of cognitive systems. That reasoning takes place in time is not an obstacle whose impact is to be minimized, but a fact which can be recognized and utilized by a reasoning agent to improve its ability to think and act. Likewise, the necessity of interacting with the social world should not be thought of as an undesirable difficulty, but as an opportunity to access and utilize resources which can dramatically improve a system's power and performance. (This research was supported in part by the AFOSR and ONR. These are not the only important aspects of embodiment, nor even the only ones relevant to agency; emotion and mood come immediately to mind as further aspects of embodied agents which must be considered for any fully realistic model of human agency; see [45, 13]. Indeed, we believe attention paid to these issues can be used to improve interactive systems more generally; see, e.g., [9, 10, 11, 31, 32, 40, 49].) In general, as Ismail and Shapiro succinctly state, the goal is to "use reasoning in the service of acting, and acting in the service of reasoning" [30]. We believe that this insight, which has driven so much interesting work in the physical embodiment of computational systems, can be fruitfully applied as well to both their temporal and social embodiment. Through our own focus on temporal integration, we are trying to move forward on all three fronts simultaneously. We believe, for a system which reasons in time, that the temporal aspects of its reasoning are salient (a belief may have been held in the past, but no longer; a train of thought may be pursued for a long time without results; a decision may need to be made in light of an approaching deadline) and that therefore any such system should be made aware of the passage of time as its reasoning proceeds.
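As a rough illustration of what such time-awareness might look like, here is a minimal sketch in the spirit of a step logic; the class and method names are our own assumptions, not the authors' implementation. Each round of inference consumes one reasoning step, and the agent's current time, Now(t), is itself one of its beliefs, updated as reasoning proceeds.

```python
# Toy step-logic reasoner: beliefs are tagged with the step at which they were
# first held, and drawing a conclusion visibly takes a step of time.
class ActiveLogic:
    def __init__(self):
        self.step = 0
        self.beliefs = {}  # formula -> step at which it was first believed

    def observe(self, formula):
        self.beliefs.setdefault(formula, self.step)

    def tick(self):
        """Advance one reasoning step; the clock reading is itself a belief."""
        self.beliefs.pop(("Now", self.step), None)  # Now(t) expires each step
        self.step += 1
        self.beliefs[("Now", self.step)] = self.step
        # One step's worth of inference: modus ponens over current beliefs.
        new = set()
        for f in self.beliefs:
            if isinstance(f, tuple) and f[0] == "implies" and f[1] in self.beliefs:
                new.add(f[2])
        for g in new:
            self.beliefs.setdefault(g, self.step)

agent = ActiveLogic()
agent.observe("p")
agent.observe(("implies", "p", "q"))
agent.tick()  # q is derived at step 1, not instantaneously
```

Because conclusions arrive at later steps than their premises, the agent can notice, for instance, that a train of inference has been running for many steps without result, or that a deadline belief now conflicts with Now(t).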
Our work with active logics (section 4) demonstrates that greater temporal integration along these lines can improve reasoning systems, and in particular that these improvements can enhance the ability of a reasoning system to integrate with changing real-world environments, as well as to better understand and integrate with human users through natural language. This is the general case: the current essay highlights the importance of these types of integration to the formulation and implementation of intentions in the case of an autonomous agent acting in the world, and we direct the reader to other published work detailing other applications of temporally situated reasoning, as in the more restricted case of a servitor which must understand and implement a user's command.

2 Some Aspects of Agency

These three aspects of embodiment are especially salient for agency. Roughly speaking, to intend an effect it is necessary to be able to identify an object, know its current state, and imagine (or otherwise represent) an alternative. To be able to intend effectively it must also be possible to map a path from the current to the intended state, and to be in a position to implement the required changes; more than this, the integration between the planning/implementation system and the intentional system must be such that information about the possibility of an intention can be taken into account in the intention-forming process, and information about an agent's progress toward a goal can be taken into account in further (or ongoing) planning aimed at reaching the goal (effecting an intention).
From its root, then, intending is bound up with temporal considerations, for it is of necessity future-directed, depending on a recognized difference between the known present and a desired future; further, planning has a temporal structure, for some things must be done first, and others last, some now and others later, and there may be groups of things that, while they must be done sequentially with respect to each other, are as a group temporally and conditionally independent of the other parts of a complex action. (In [30] Haythem Ismail and Stuart Shapiro suggest a similar, but less broadly stated, philosophy to guide work on embodied cognitive agents. The discussion of agency is influenced throughout by [6, 16, 8]. There are many senses of the word "intention"; we are most interested in the one used in connection with explicitly represented and consciously available goals which play a central role in acting, and planning to act, in the world. It is worth distinguishing this in particular from a sense of "intended" more closely bound to the notion of the voluntary; see [6] for an excellent discussion.) This feature of agency is sometimes called a plan hierarchy, and it might be thought that the relations can be characterized in a time-independent way, in terms of conditional dependencies and prerequisites. Even where such re-description is possible, however, it seems that it would submerge real features of human agency and planning, for in designing and implementing a complex plan, it will be important to know which sub-plans will take longer (and so must be started now, despite conditional independence from other parts of the plan), which can be done simultaneously, and which would be best to finish simultaneously. That is, planning seems to involve considerations of timing which cannot be expressed purely in terms of functional dependence.
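The timing point can be made concrete with a toy sketch; the sub-plan names and durations below are hypothetical illustrations, not drawn from the paper. Given duration estimates and a shared deadline, a planner can compute which conditionally independent sub-plan must nonetheless be started first.

```python
# Latest-start scheduling: even sub-plans with no conditional dependencies on
# one another differ in urgency once their durations are taken into account.
def latest_starts(subplans, deadline):
    """Map each sub-plan to the latest time it can begin and still finish."""
    return {name: deadline - duration for name, duration in subplans.items()}

# Hypothetical sub-plans of one complex action, with estimated durations.
subplans = {"book_venue": 10, "send_invites": 3, "buy_cake": 1}
starts = latest_starts(subplans, deadline=12)

# book_venue must begin at t=2, long before the others, despite being
# conditionally independent of them.
urgent = min(starts, key=starts.get)
```

This is exactly the information a purely dependency-based plan hierarchy would submerge: nothing in the conditional structure says that booking the venue is the pressing task.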
Obviously, agency also requires awareness of the environment, and of the agent's own situation in that environment; and in many cases agency is also importantly bound up with social awareness, for an intention can originate in a request or command, or it could require the cooperation of other agents. In a more complicated case an intention could have ethical dimensions which would play a role in determining the methods whereby (or even whether) it was effected. The intentional system also has its own internal structure or hierarchy which is important to its nature. There are at least two axes to this structure, one temporal and one directional. The temporal axis is structured by the difference between short-term (proximal) and long-term (distal) intentions. We might get at the difference between proximal and distal intentions by saying: if intention Q could be provided by an agent in justification or explanation of the action P, and if action P, once completed, does not fulfill the intention Q (that is, if it remains rational to intend Q), then Q is a distal intention. Thus, the intention Q (that Charles will break his leg) provided in explanation of action P (putting a roller skate in the middle of the hall) is distal. In contrast, going upstairs to get the camera represents a proximal intention. An action can be distally related to an intention for two different reasons: first, it can be reasonably explained by Q only in light of a prediction R (that Charles will come through the door and step on the skate, etc.); second, it can be one action in a planned series of actions, such that the series S, but not the action P, will result in the fulfillment of Q. Another way to get at the distinction is in terms of whether a given intention seems to require distinct other intentions for its fulfillment.
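The proximal/distal test just stated can be encoded as a simple predicate. The helper fulfilled_after and the toy fulfillment table below are our illustrative assumptions, not part of the authors' formalism; the examples are the paper's own.

```python
# Q is distal with respect to P if completing P does not fulfill Q
# (i.e., it remains rational to keep intending Q afterward).
def is_distal(intention, action, fulfilled_after):
    return not fulfilled_after(intention, action)

# Toy model: which (intention, action) pairs count as fulfilled on completion.
FULFILLED = {("get the camera", "go upstairs and grab the camera")}

def fulfilled_after(intention, action):
    return (intention, action) in FULFILLED

# The paper's examples, encoded as strings:
skate_is_distal = is_distal("Charles breaks his leg",
                            "put a skate in the hall", fulfilled_after)
camera_is_distal = is_distal("get the camera",
                             "go upstairs and grab the camera", fulfilled_after)
```

As the text goes on to note, the verdict depends on how narrowly P and Q are construed, so in practice the predicate marks a continuum rather than a sharp partition.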
Getting the camera, despite the fact that doing so requires going up the stairs, going into the bedroom, and grabbing the camera (and that these actions would be intentional in the sense of voluntary, mentioned above), doesn't seem to require further explicit intentions or predictions. On the other side of the coin, "writing my dissertation", offered in explanation of typing away on my computer, looks to express a distal intention; there is no single, coherent action, unguided by further explicit intentions, which constitutes "writing one's dissertation". (This assumes an obvious distinction between predictions, which are not required, and expectations, such as that the camera is upstairs, which are. There is a complication here, because actions themselves can be more or less narrowly defined; there will almost always be a way to construe P so as to make it distally related to any given Q. This is true, but perhaps not telling, for there will generally be a way to construe Q so as to make it proximal with respect to P; this will be the case whenever there is a single coherent, though not necessarily simple, action which constitutes "the doing of Q".) It is not important that every case be clear; the image of a structural axis suggests a continuum, where some intentions are clearly proximal, and others clearly distal. That there is such a structure to the intentional system is important, however, and it is one of the insights which lies behind Michael Bratman's notion of a planning agent [8], which we might describe as an agent which is capable of analyzing distal intentions into proximal ones, and proximal intentions into actions. Bratman describes planning agency this way: Our purposive activity is typically embedded in multiple, interwoven quilts of partial, future-directed plans of action. We settle in advance on such plans of action, fill them in, adjust them, and follow through with them as time goes by.
We thereby support complex forms of organization in our own, temporally extended lives and in our interactions with others; and we do this in ways that are sensitive to the limits on our cognitive resources. ([8], p. 1) In addition to the welcome focus on the temporal aspects of agency, the central upshot of this picture is that committing oneself to a (distal) intention is more like driving a car to a destination than throwing a ball at a target: one must continually observe one's progress toward the adopted end, and use these observations to make decisions and adjustments to best guide one's actions. Further, it is important to monitor the world and our effects on it not just in the service of guiding current intentions to their fulfillment, but also in the service of maintaining an accurate self-conception. It seems clear that an important part of our ability to intend, to choose means, and to guide actions to their ends is bound up with an accurate assessment of our particular abilities and capacities as practical agents. Just as is the case with knowledge of the world, this self-knowledge must be constantly monitored for accuracy. Thus, our ability to maintain an effective practical agency requires not just reasoning about actions in light of the world, but reasoning about ourselves (and our beliefs about ourselves) in light of the success or failure of our actions; in other words, agency requires not just reasoning, but introspection and meta-reasoning as well. This brings us to the directional axis of the intentional system, which concerns the object at which the intention aims: there are internally directed intentions, which only effect other intentions, and externally directed ones, aimed at the world. An example of the first sort might be an intention about what sort of agent (person) to be [22]; such an intention needn't be anything so grand as wanting to be a moral saint, but can involve more mundane matters like a desire for efficiency.
A desire to be an efficient person will figure in decisions about which of many possible paths to follow to a given end; in a more complex case it could figure even in choosing which strategy of reasoning to use to decide between competing paths (see [14] for a brief discussion). (It is worth emphasizing what is only mentioned in the quote, but is centrally important to Bratman's theory of agency: that we are not just individually planning agents, but cooperative ones. We not only plan complex projects in coordination with one another, but even in the pursuit of individual intentions consider ways in which we might secure each other's cooperation.) This points to another kind of metareasoning required of a fully-specified autonomous agent.

3 Agency and Uncertainty

Intention formation (indeed, planned, directed action more generally) requires continual observation, cooperation and planning; in addition, a robustly specified agent seems to require introspection and meta-reasoning. As has been mentioned already, a good deal of the current work most concerned with robust physical/environmental integration adopts a highly reactive model of agency, designed to produce complex behaviors without detailed internal representations; the field in general is re-thinking the more traditional "symbolic processing" approach to modeling and producing intelligent behavior [4, 2]. However, insofar as the above analysis is correct, agency requires not just continual observation of and reactive adjustments to the physical environment, but also introspective observation and the ability to engage in meta-reasoning about intentions and capacities, making appropriate internal adjustments. This suggests that a more complete agent must be both reactive and deliberative; yet these are generally considered antithetical goals. One (but certainly not the only) way in which this tension can be brought out is by considering the problem of reasoning under uncertainty.
The real world is complex, dynamic, and not completely knowable. What is true now may not have been true before, or for much longer, and more can always be discovered. Any system that purports to model the world (or any part of it) must be able to accommodate such changes. Given these conditions, any reasoning about the world is provisional and uncertain, because changes in what is true of, or known about, the world can require revisions to simple beliefs, derived conclusions, generalizations, and even heuristics for thinking. Systems that hope to accomplish this task must be flexible enough to recognize and gracefully handle these situations, and to recover from the contradictions, inconsistencies, and irregularities that they involve. Much the same can be said about integration with the social world. Focusing just on linguistic interaction (which must be considered central to social integration), complexity and uncertainty seem the salient characteristics. Conversation is not generally the exchange of fully formed, grammatically correct, and error-free utterances. Indeed, it is unlikely that there could ever be a fully fluent, error-free dialog; even putting aside problems of signal reception, and assuming perfect syntactic processing, the ability to understand a dialog partner involves such complicated and uncertain tasks as modeling their knowledge state and using context to disambiguate reference.
As in the case of observing the physical environment, one must be prepared to retract conclusions, and engage in repairs of one's beliefs as well as of the dialog itself, as the conversation continues and more evidence comes in. (This latter task points already to ways in which social and physical integration are intertwined, and suggests the sorts of reasons one might give for moving forward with these two aspects of embodiment simultaneously: disambiguating a reference ("I guess he's had enough.") can require not just attention to the dialog context, in which "he" might refer to the current subject of conversation, but also to the physical context of the dialog, in which "he" might be taken to refer to the fellow who just fell off the barstool.)
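The graceful recovery from contradictions that this section calls for can be sketched minimally; the belief representation below (negation as a ("not", p) tuple) is a toy encoding of our own devising, in the active-logic spirit of setting contradictory beliefs aside rather than letting them license arbitrary conclusions.

```python
# When both p and not-p are believed, quarantine the pair for further scrutiny
# instead of letting the contradiction poison the whole belief set.
def resolve_contradictions(beliefs):
    """Return (kept, distrusted): directly contradictory pairs are set aside."""
    distrusted = set()
    for b in beliefs:
        neg = ("not", b)
        if neg in beliefs:
            distrusted |= {b, neg}
    return beliefs - distrusted, distrusted

kept, quarantined = resolve_contradictions({"p", ("not", "p"), "r"})
# "r" survives; "p" and ("not", "p") are quarantined, not used in inference.
```

The point of the sketch is the contrast with classical logic, where a single contradiction entails everything; a time-situated reasoner can instead notice the conflict at some step and continue reasoning with its remaining beliefs.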


Similar resources

State-of-the-art of intention recognition and its use in decision making

Intention recognition is the process of becoming aware of the intentions of other agents, inferring them through observed actions or effects on the environment. Intention recognition enables pro-activeness, in cooperating or promoting cooperation, and in pre-empting danger. Technically, intention recognition can be performed incrementally as you go along, which amounts to learning. Intention re...


Defeasible Logic: Agency, Intention and Obligation

We propose a computationally oriented non-monotonic multi-modal logic arising from the combination of agency, intention and obligation. We argue about the defeasible nature of these notions and then we show how to represent and reason with them in the setting of defeasible logic.


The Problem of Presence in Space: Awareness and Spatial Agency, with an Emphasis on Urban Public Space

Public space is the realm of the concrete and substantial presence of different social groups with different behavior patterns. Space in this sense is an entity that is formed by people, through individual and collective action and social relations. The presence of people in space, in a way that is free from domination, can strengthen urban life. This paper, b...


Procedural Reasoning System, Teleo-Reactive Agents, Plans and Agenda, Intentions

A situated agent is a computer-based system that is embedded in a real-time world or environment, is ascribed some mental states, and may enjoy a disconcerting variety of properties such as proactivity, reactivity, etc. In this paper we present the design of a class of knowledge-based situated agents and an agent specification language, called SICLE. After the SICLE interpreter receives a S...


Thesis for the Degree of Doctor of the University, Discipline: Computer Science

The goal of this thesis is to study the issue of rational BDI learning agents, situated in a multi-agent system. A rational agent can be defined as a cognitive entity endowed with intentional attitudes, e.g., beliefs, desires, and intentions (BDI). First, we study the concepts of agency and practical reasoning, allowing agents to induce from their intentional attitudes, a behavior identified as...


Inferring agency from sound.

In three experiments we investigated how people determine whether or not they are in control of sounds they hear. The sounds were either triggered by participants' taps or controlled by a computer. The task was to distinguish between self-control and external control during active tapping, and during passive listening to a playback of the sounds recorded during the active condition. Experiment ...





Publication date: 2002