Proceedings of the 6th Nordic Workshop on Programming Theory, 17–19 October 1994, Aarhus, Denmark

Authors

  • Uffe H. Engberg
  • Kim G. Larsen
  • Peter D. Mosses
  • Matthew Hennessy
  • Sigurd Meldal
  • Bengt Jonsson
  • Janne K. Damgaard
Abstract

…abstraction of the non-negative natural numbers. The correctness of the complete analysis then follows from the subject reduction result of [13] that allows us to lift safety (as opposed to liveness) results from the behaviours to safety results for CML programs.

We also address the implementation of the second stage of the analysis. Here the idea is to transform the problem as specified by the syntax-directed inference system into a syntax-free equation solving problem where standard techniques from data flow analysis can be used to obtain fast implementations. (As already mentioned the implementation of the first stage is the topic of [14, 1].)

Comparison with other work. First we want to stress that our approach to processor allocation is that of static program analysis rather than, say, heuristics based on profiling as is often found in the literature on implementation of concurrent languages.

In the literature there are only a few program analyses for combined functional and concurrent languages. An extension of SML with Linda communication primitives is studied in [3] and, based on the corresponding process algebra, an analysis is presented that provides useful information for the placement of processes on a finite number of processors. A functional language with communication via shared variables is studied in [9] and its communication patterns are analysed, again with the goal of producing useful information for processor (and storage) allocation. Also a couple of program analyses have been developed for concurrent languages with an imperative facet. The papers [4, 8, 15] all present reachability analyses for concurrent programs with a statically determined communication topology; only [15] shows how this restriction can be lifted to allow communication in the style of the π-calculus. Finally, [11] presents an analysis determining the number of communications on each channel connecting two processes in a CSP-like language.

As mentioned our analysis is specified in two stages.
The first stage is formalised in [13, 14]; similar considerations were carried out by Havelund and Larsen leading to a comparable process algebra [6] but with no formal study of the link to CML nor with any algorithm for automatically extracting behaviours. The same overall idea is present in [3] but again with no formal study of the link between the process algebra and the programming language.

The second stage of the analysis extracts much more detailed information from the behaviours and this leads to a much more complex notion of correctness than in [13]. Furthermore, the analysis is parameterised on the choice of value space, thereby incorporating ideas from abstract interpretation.

2 Behaviours

Full details of the syntax of CML are not necessary for the developments of the present paper. It will suffice to introduce a running example and to use it to motivate the process algebra of CML.

Example 2.1 Suppose we want to define a program pipe [f1,f2,f3] in out that constructs a pipeline of processes: the sequence of inputs is taken over channel in, the sequence of outputs is produced over channel out and the functions f1, f2, f3 (and the identity function id defined by fn x => x) are applied in turn. To achieve concurrency we want separate processes for each of the functions f1, f2, f3 (and id). This system might be depicted graphically as follows:

  in --> [f1] --ch1--> [f2] --ch2--> [f3] --ch3--> [id] --> out

(each of the four processes may additionally report on the channel fail)

Here ch1, ch2, and ch3 are new internal channels for interconnecting the processes; and fail is a channel over which failure of operation may be reported. Taking the second process as an example it may be created by the CML expression node f2 ch1 ch2 where the function node is given by

  fn f => fn in => fn out =>
    fork (rec loop d =>
      sync (choose [wrap (receive in,
                          fn x => sync (send (out, f x)); loop d),
                    send (fail, ())]))

Here f is the function to be applied, in is the input channel and out is the output channel.
The function fork creates a new process labelled ℓ that performs as described by the recursive function loop that takes the dummy parameter d. In each recursive call the function may either report failure by send(fail,()) or it may perform one step of the processing: receive the input by means of receive in, take the value x received and transmit the modified value f x by means of send(out,f x) after which the process repeats itself by means of loop d. The primitive choose allows us to perform an unspecified choice between the two communication possibilities and wrap allows us to modify a communication by postprocessing the value received or transmitted. The sync primitive enforces synchronisation at the right points and we refer to [16] for a discussion of the language design issues involved in this; once we have arrived at the process algebra such considerations will be of little importance to us.

The overall construction of the network of processes is then the task of the pipe function defined by

  rec pipe fs => fn in => fn out =>
    if isnil fs
    then node (fn x => x) in out
    else let ch = channel ()
         in (node (hd fs) in ch; pipe (tl fs) ch out)

Here fs is the list of functions to be applied, in is the input channel, and out is the output channel. If the list of functions is empty we connect in and out by means of a process that applies the identity function; otherwise we create a new internal channel by means of channel () and then we create the process for the first function in the list and then recurse on the remainder of the list.

The process algebra of CML [13] allows us to give succinct representations of the communications taking place in CML programs. The terms of the process algebra are called behaviours, denoted b ∈ Beh, and are given by

  b ::= τ | L!t | L?t | t chan_L | β | fork_L b | b1; b2 | b1 + b2 | rec β : b

where L ⊆ Labels is a non-empty and finite set of program labels. The behaviour τ is associated with the pure functional computations of CML.
The behaviours L!t and L?t are associated with sending and receiving values of type t over channels with label in L, the behaviour t chan_L is associated with creating a new channel with label in L and over which values of type t can be communicated, and the behaviour fork_L b is associated with creating a new process with behaviour b and with label in L. Together these behaviours constitute the atomic behaviours, denoted p ∈ ABeh, as may be expressed by setting

  p ::= τ | L!t | L?t | t chan_L | fork_L b

Finally, behaviours may be composed by sequencing (as in b1; b2) and internal choice (as in b1 + b2) and we use behaviour variables β together with an explicit rec construct to express recursive behaviours.

The structure of the types, denoted t ∈ Typ, shall be of little concern to us in this paper and we shall therefore leave it mostly unspecified (but see [13]); however, we need to state that α chan_L is the type of a channel with label in L over which elements of type α may be communicated. Since types might conceivably contain behaviours the notion of free variables needs to be replaced by a notion of exposed variables: we shall say that a behaviour variable β is exposed in a behaviour b if it has a free occurrence that is not a subterm of any type mentioned in b.

Example 2.2 Assuming that fail is a channel of type unit chan_L the type inference system of [13] can be used to prove that pipe has type

  (α →ε α) list → α chan_L1 → α chan_L2 →b unit

where b is

  rec β' : (fork_ℓ (rec β'' : (L1?α; ε; L2!α; β'' + L!unit))
           + α chan_L1; fork_ℓ (rec β'' : (L1?α; ε; L2!α; β'' + L!unit)); β')

Thus the behaviour expresses directly that the pipe function is recursively defined and that it either spawns a single process or creates a channel, spawns a process and recurses.
The spawned processes will all be recursive and they will either report failure over a channel in L and terminate, or else input over a channel in L1, do something (as expressed by ε and τ), output over a channel in L2 and recurse.

The semantics of behaviours is defined by a transition relation of the form

  PB =(a; ps)=> PB'

where PB and PB' are mappings from process identifiers to closed behaviours and the special symbol √ denoting termination. Furthermore, a is an action that takes place and ps is a list of the processes that take part in the action. The actions rather closely correspond to atomic behaviours and are given by

  a ::= τ | L1!t?L2 | t chan_L | fork_L b

If the transition PB =(a; ps)=> PB' has a = τ this means that one of the behaviours in PB performed some internal computation that did not involve communication; in other words it performed the atomic behaviour τ. If a = L1!t?L2 this means that two distinct behaviours performed a communication: one performed the atomic behaviour L1!t and the other the atomic behaviour L2?t. Finally if a = t chan_L or a = fork_L b this means that one of the behaviours in PB allocated a new channel or forked a new process. Since we have covered all possibilities of atomic behaviours we have also covered all possibilities of actions. We refer to [13] for the precise details of the semantics as these are of little importance for the development of the analyses.

3 Value Spaces

In the analyses we want to predict the number of times certain events may happen. The precision as well as the complexity of the analyses will depend upon how we count so we shall parameterise the formulation of the analyses on our notion of counting.

This amounts to abstracting the non-negative integers N by a complete lattice (Abs, ⊑). As usual we write ⊥ for the least element, ⊤ for the greatest element, ⨆ and ⊔ for least upper bounds and ⊓ for greatest lower bounds. The abstraction is expressed by a function R : N → Abs that is strict (has R(0) = ⊥)
and monotone (has R(n1) ⊑ R(n2) whenever n1 ≤ n2); hence the ordering on the natural numbers is reflected in the abstract values. Three elements of Abs are of particular interest and we shall introduce special syntax for them:

  o = R(0) = ⊥
  i = R(1)
  m = ⊤

We cannot expect our notion of counting to be precisely reflected by Abs; indeed it is likely that we shall identify for example R(2) and R(3) and perhaps even R(1) and R(2). However, we shall ensure throughout that no identifications involve R(0) by demanding that R⁻¹(o) = {0} so that o really represents "did not happen".

We shall be interested in two binary operations on the non-negative integers. One is the operation of maximum: max{n1, n2} is the larger of n1 and n2. In Abs we shall use the binary least upper bound operation ⊔ to express the maximum operation. Indeed R(max{n1, n2}) = R(n1) ⊔ R(n2) holds by monotonicity of R as do the laws n1 ⊑ n1 ⊔ n2, n2 ⊑ n1 ⊔ n2 and n ⊔ n = n. As a consequence n1 ⊔ n2 = o iff both n1 and n2 equal o.

The other operation is addition: n1 + n2 is the sum of n1 and n2. In Abs we shall have to define a function ⊕ and demand that (Abs, ⊕, o) is an Abelian monoid with ⊕ monotone. This ensures that we have the associative law n1 ⊕ (n2 ⊕ n3) = (n1 ⊕ n2) ⊕ n3, the absorption laws n ⊕ o = o ⊕ n = n, the commutative law n1 ⊕ n2 = n2 ⊕ n1 and by monotonicity we have also the laws n1 ⊑ n1 ⊕ n2 and n2 ⊑ n1 ⊕ n2. As a consequence n1 ⊕ n2 = o iff both n1 and n2 equal o. To ensure that ⊕ models addition on the integers we impose the condition ∀n1, n2: R(n1 + n2) ⊑ R(n1) ⊕ R(n2) that is common in abstract interpretation.

Definition 3.1 A value space is a structure (Abs, ⊑, o, i, m, ⊕, R) as detailed above. It is an atomic value space if i is an atom (that is, o ⊑ n ⊑ i implies that o = n or i = n).

Example 3.2 One possibility is to use A3 = {o, i, m} and define ⊑ by o ⊑ i ⊑ m. The abstraction function R will then map 0 to o, 1 to i and all other numbers to m.
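This three-element abstraction is easy to make concrete. The sketch below (in Python, with hypothetical names of our own choosing) encodes A3 together with R, the binary least upper bound, and an abstract addition chosen to satisfy the condition R(n1 + n2) ⊑ R(n1) ⊕ R(n2):

```python
# Hypothetical encoding of the three-element value space A3 = {o, i, m}
# of Example 3.2: o below i below m, R(0) = o, R(1) = i, R(n) = m otherwise.

ORDER = {"o": 0, "i": 1, "m": 2}  # rank in the chain o, i, m

def R(n):
    """Abstraction of a non-negative integer into A3."""
    return "o" if n == 0 else ("i" if n == 1 else "m")

def join(n1, n2):
    """Binary least upper bound (abstract maximum)."""
    return n1 if ORDER[n1] >= ORDER[n2] else n2

def plus(n1, n2):
    """Abstract addition: o is the neutral element, and adding two
    non-zero counts yields 'many'."""
    if n1 == "o":
        return n2
    if n2 == "o":
        return n1
    return "m"
```

A quick check of the required laws: join is idempotent (join("i","i") is "i"), plus absorbs o, and the rank of R(a + b) never exceeds the rank of plus(R(a), R(b)).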
The operations ⊔ and ⊕ can then be given by the following tables:

  ⊔ | o i m        ⊕ | o i m
  --+------        --+------
  o | o i m        o | o i m
  i | i i m        i | i m m
  m | m m m        m | m m m

This defines an atomic value space.

For two value spaces (Abs', ⊑', o', i', m', ⊕', R') and (Abs'', ⊑'', o'', i'', m'', ⊕'', R'') we may construct their cartesian product (Abs, ⊑, o, i, m, ⊕, R) by setting Abs = Abs' × Abs'' and by defining ⊑, o, i, m, ⊕ and R componentwise. This defines a value space but it is not atomic even if Abs' and Abs'' both are. As a consequence i = (i', i'') will be of no concern to us; instead we use (o', i'') and (i', o'') as appropriate.

For a value space (Abs', ⊑', o', i', m', ⊕', R') and a non-empty set E of events we may construct the indexed value space (or function space) (Abs, ⊑, o, i, m, ⊕, R) by setting Abs = E → Abs' (the set of total functions from E to Abs') and by defining ⊑, o, i, m, ⊕ and R componentwise. This defines a value space that is almost never atomic; as a consequence i = λe.i' will be of no concern to us.

For indexed value spaces we may represent f ∈ E → Abs by rep(f) ∈ E ⇀ Abs\{o} where E ⇀ Abs\{o} denotes the set of partial functions from E to Abs\{o}; here rep(f) maps e to n iff f(e) = n and n ≠ o. In practice we want to restrict E to be a finite set in order to obtain finite representations; we write f ∈ E →f Abs to indicate that f is o on all but a finite number of arguments so that such a representation is possible.

4 Counting the Behaviours

For a given behaviour b and value space Abs we may ask the following four questions:

  how many times are channels labelled by L created?
  how many times do channels labelled by L participate in input?
  how many times do channels labelled by L participate in output?
  and how many times are processes labelled by L generated?

To answer these questions we define an inference system with formulae

  benv |- b : A

where LabSet = Pf(Labels) is the set of finite and non-empty subsets of Labels and A ∈ LabSet →f Abs records the required information.

In this section we shall define the inference system for answering all four questions simultaneously. Hence we let Abs be the four-fold cartesian product Ab⁴ of an atomic value space Ab; we shall leave the formulation parameterised on the choice of Ab but a useful candidate is the three-element value space A3 of Example 3.2 and this will be the choice in all examples.

The idea is that A(L) = (nc, ni, no, nf) means that channels labelled by L are created at most nc times, that channels labelled by L participate in at most ni input operations, that channels labelled by L participate in at most no output operations, and that processes labelled by L are generated at most nf times. The behaviour environment benv then associates each behaviour variable with an element of LabSet →f Abs.

The analysis is defined in Table 1:

  benv |- τ : [ ]

  benv |- L!t : [L ↦ (o, o, i, o)]

  benv |- L?t : [L ↦ (o, i, o, o)]

  benv |- t chan_L : [L ↦ (i, o, o, o)]

  benv |- b : A
  -----------------------------------------
  benv |- fork_L b : [L ↦ (o, o, o, i)] ⊕ A

  benv |- b1 : A1    benv |- b2 : A2
  ----------------------------------
  benv |- b1; b2 : A1 ⊕ A2

  benv |- b1 : A1    benv |- b2 : A2
  ----------------------------------
  benv |- b1 + b2 : A1 ⊔ A2

  benv[β ↦ A] |- b : A
  ---------------------
  benv |- rec β : b : A

  benv |- β : A    if benv(β) = A

  Table 1: Analysis of behaviours

We use [ ] as a shorthand for λL.(o, o, o, o) and [L ↦ ñ] as a shorthand for the function mapping L to ñ and every L' ≠ L to (o, o, o, o). Note that i denotes the designated "one"-element in each copy of Ab since it is the atoms (i, o, o, o), (o, i, o, o), (o, o, i, o), and (o, o, o, i) that are useful for increasing the count.
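To illustrate how such a table reads as an algorithm, here is a hedged Python sketch. The tagged-tuple representation of behaviours is our own invention (the concrete syntax tree is left abstract here), and for rec we compute the least fixed point by iteration, which is one admissible reading of a rule that only demands a fixed point; the iteration terminates because all operations are monotone and the A3-based lattice is finite:

```python
# A sketch of the counting analysis over the A3 value space.
# Behaviours are tagged tuples (hypothetical representation):
#   ("tau",), ("out", L), ("in", L), ("chan", L), ("fork", L, b),
#   ("seq", b1, b2), ("choice", b1, b2), ("var", beta), ("rec", beta, b)
# where L is a frozenset of labels (the type component t is ignored,
# as the counting analysis does not inspect it). A result maps label
# sets to quadruples (nc, ni, no, nf) over A3 = {"o", "i", "m"}.

RANK = {"o": 0, "i": 1, "m": 2}
ZERO = ("o", "o", "o", "o")

def a3_join(x, y):
    return x if RANK[x] >= RANK[y] else y

def a3_plus(x, y):
    if x == "o": return y
    if y == "o": return x
    return "m"

def combine(op, A1, A2):
    """Apply op componentwise; absent label sets count as (o,o,o,o)."""
    out = {}
    for L in set(A1) | set(A2):
        quad = tuple(op(x, y) for x, y in zip(A1.get(L, ZERO), A2.get(L, ZERO)))
        if quad != ZERO:
            out[L] = quad
    return out

def analyse(b, benv):
    tag = b[0]
    if tag == "tau":
        return {}
    if tag == "chan":
        return {b[1]: ("i", "o", "o", "o")}
    if tag == "in":
        return {b[1]: ("o", "i", "o", "o")}
    if tag == "out":
        return {b[1]: ("o", "o", "i", "o")}
    if tag == "fork":  # count the fork and include the forked body's effect
        return combine(a3_plus, {b[1]: ("o", "o", "o", "i")}, analyse(b[2], benv))
    if tag == "seq":
        return combine(a3_plus, analyse(b[1], benv), analyse(b[2], benv))
    if tag == "choice":
        return combine(a3_join, analyse(b[1], benv), analyse(b[2], benv))
    if tag == "var":
        return benv[b[1]]
    if tag == "rec":  # least fixed point by Kleene iteration
        beta, body, A = b[1], b[2], {}
        while True:
            A2 = analyse(body, dict(benv, **{beta: A}))
            if A2 == A:
                return A
            A = A2
```

For instance, a looping behaviour such as rec β : ((L!t; β) + τ) is analysed to m outputs on L, matching the intuition that the loop may send arbitrarily often.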
In the rule for fork_L we are deliberately incorporating the effects of the forked process; to avoid doing so simply remove the "⊕ A" component. The rules for sequencing, choice, and behaviour variables are straightforward given the developments of the previous section.

Note that the rule for recursion expresses a fixed point property and so allows some slackness; it would be inelegant to specify a least (or greatest) fixed point property whereas a post-fixed point (we take a post-fixed point of a function f to be an argument n such that f(n) ⊑ n) could easily be accommodated by incorporating a notion of subsumption into the rule. We decided not to incorporate a general subsumption rule and to aim for specifying as unique results as the rule for recursion allows.

Example 4.1 For the pipe function of Examples 2.1 and 2.2 the analysis will give the following information (read "m" as "many"):

  L1: m channels created and m inputs performed
  L2: m outputs performed
  L:  m outputs performed
  ℓ:  m processes created

While this is evidently correct it also seems pretty uninformative; yet we shall see that this simple analysis suffices for developing more informative analyses for static and dynamic processor allocation.

To formally express the correctness of the analysis we need a few definitions. Given a list X of actions define:

  COUNT(X) = λL.(CC(X, L), CI(X, L), CO(X, L), CF(X, L))

  CC(X, L): the number of elements of the form t chan_L in X,
  CI(X, L): the number of elements of the form L'!t?L in X,
  CO(X, L): the number of elements of the form L!t?L' in X, and
  CF(X, L): the number of elements of the form fork_L b in X.

The formal version of our explanations above about the intentions with the analysis then amounts to the following soundness result:

Theorem 4.2 If ∅ |- b : A and [pi0 ↦ b] =(a1; ps1)=> ... =(ak; psk)=> PB then we have R*(COUNT[a1, ..., ak]) ⊑ A, where R*(C)(L) = (R(c), R(i), R(o), R(f)) if C(L) = (c, i, o, f).

5 Implementation

It is well-known that compositional specifications of program analyses (whether as abstract interpretations or annotated type systems) are not the most efficient way of obtaining the actual solutions. We therefore demonstrate how the inference problem may be transformed to an equation solving problem that is independent of the syntax of our process algebra and where standard algorithmic techniques may be applied. This approach also carries over to the inference systems for processor allocation developed subsequently.

The first step is to generate the set of equations. To show that this does not affect the set of solutions we shall be careful to avoid undesirable "cross-over" between equations generated from disjoint syntactic components of the behaviour. One possible cause for such "cross-over" is that behaviour variables may be bound in more than one rec; one classical solution to this is to require that the overall behaviour be alpha-renamed such that this does not occur; the solution we adopt avoids this requirement by suitable modification of the equation system.

  E[[B : $ : τ]]         = { ⟨$⟩ = [ ] }
  E[[B : $ : L!t]]       = { ⟨$⟩ = [L ↦ (o, o, i, o)] }
  E[[B : $ : L?t]]       = { ⟨$⟩ = [L ↦ (o, i, o, o)] }
  E[[B : $ : t chan_L]]  = { ⟨$⟩ = [L ↦ (i, o, o, o)] }
  E[[B : $ : fork_L b]]  = { ⟨$⟩ = [L ↦ (o, o, o, i)] ⊕ ⟨$1⟩ } ∪ E[[B : $1 : b]]
  E[[B : $ : b1; b2]]    = { ⟨$⟩ = ⟨$1⟩ ⊕ ⟨$2⟩ } ∪ E[[B : $1 : b1]] ∪ E[[B : $2 : b2]]
  E[[B : $ : b1 + b2]]   = { ⟨$⟩ = ⟨$1⟩ ⊔ ⟨$2⟩ } ∪ E[[B : $1 : b1]] ∪ E[[B : $2 : b2]]
  E[[B : $ : β]]         = { ⟨$⟩ = ⟨β⟩ }
  E[[B : $ : rec β : b]] = CLOSE$β( { ⟨$⟩ = ⟨$1⟩, ⟨$⟩ = ⟨β⟩ } ∪ E[[B : $1 : b]] )

  Table 2: Constructing the equation system
Another possible cause for "cross-over" is that disjoint syntactic components of the overall behaviour may nonetheless have components that syntactically appear the same; we avoid this problem by the standard use of tree-addresses (denoted $).

The function E for generating the equations for the overall behaviour B achieves this by the call E[[B : ε : B]] where ε denotes the empty tree-address. In general B : $ : b indicates that the subtree of B rooted at $ is of the form b and the result of E[[B : $ : b]] is the set of equations produced for b. The formal definition is given in Table 2.

The key idea is that E[[B : $ : b]] operates with flow variables of the form ⟨$'⟩ and ⟨β'⟩. We maintain the invariant that all $' occurring in E[[B : $ : b]] are (possibly empty) prolongations of $ and that all β' occurring in E[[B : $ : b]] are exposed in b. To maintain this invariant in the case of recursion we define

  CLOSE$β(E) = { (L[⟨$⟩/⟨β⟩] = R[⟨$⟩/⟨β⟩]) | (L = R) ∈ E }

(although it would actually suffice to apply the substitution [⟨$⟩/⟨β⟩] on the righthand sides of equations and it would be correct to remove the trivial equation produced).

Terms of the equations are formal terms over the flow variables (that range over the complete lattice LabSet → Abs), the operations ⊕ and ⊔ and the constants (that are elements of the complete lattice LabSet → Abs). Thus all terms are monotonic in their free flow variables. A solution to a set E of equations is a partial function ρ from flow variables to LabSet → Abs such that all flow variables in E are in the domain of ρ and such that all equations (L = R) of E have ρ(L) = ρ(R) where ρ is extended to formal terms in the obvious way. We write ρ |= E whenever this is the case.

Theorem 5.1 [ ] |- b : A iff there exists ρ such that ρ |= E[[b : ε : b]] and ρ(⟨ε⟩) = A.

Corollary 5.2 The least (or greatest) A such that [ ] |- b : A is ρ(⟨ε⟩) for the least (or greatest) ρ such that ρ |= E[[b : ε : b]].

We have now transformed our inference problem to a form where the standard algorithmic techniques can be exploited.
These include simplifications of the equation system, partitioning the equation system into strongly connected components processed in (reverse) topological order, widening to ensure convergence when Abs does not have finite height, etc.; a good overview of useful techniques may be found in [2, 7, 10, 17]. Also the flow variables may be decomposed to families of flow variables over simpler value spaces.

6 Static Processor Allocation

The idea behind the static processor allocation is that all processes with the same label will be placed on the same processor and we would therefore like to know what requirements this puts on the processor. To obtain such information we shall extend the simple counting analysis of Section 4 to associate information with the process labels mentioned in a given behaviour b. For each process label La we therefore ask the four questions of Section 4 accumulating the total information for all processes with label La: how many times are channels labelled by L created, how many times do channels labelled by L participate in input, how many times do channels labelled by L participate in output, and how many times are processes labelled by L generated?

Example 6.1 Let us return to the pipe function of Examples 2.1 and 2.2 and suppose that we want to perform static processor allocation. This means that all instances of the processes labelled ℓ will reside on the same processor. The analysis should therefore estimate the total requirements of these processes as follows:

  main: L1: m channels created
        ℓ:  m processes created
  ℓ:    L1: m inputs performed
        L2: m outputs performed
        L:  m outputs performed

Note that even though each process labelled by ℓ can only communicate once over L we can generate many such processes and their combined behaviour is to communicate many times over L.
It follows from this analysis that the main program does not in itself communicate over L2 or L and that the processes do not by themselves spawn new processes.

Now suppose we have a network of processors that may be explained graphically as follows:

  [Figure: three interconnected processors P1, P2 and P3.]

One way to place our processes is to place the main program on P1 and all the processes labelled ℓ on P2. This requires support for multitasking on P2 and for multiplexing (over L1) on P1 and P2.

The analysis (specified in Table 3) is obtained by modifying the inference system of Section 4 to have formulae

  benv |- b : A & P

where A ∈ LabSet →f Abs as before and the new ingredient is

  P : LabSet →f (LabSet →f Abs)

The idea is that if some process is labelled La then P(La) describes the total requirements of all processes labelled by La. The behaviour environment benv is an extension of that of Section 4 in that it associates pairs A & P with the behaviour variables. Note that in the rule for fork_L we have removed the "⊕ A" component from the local effect; instead it is incorporated in the global effect for L.

To express the correctness of the analysis we need to keep track of the relationship between the process identifiers and the associated labels. So let penv be a mapping from process identifiers to elements La of LabSet. We shall say that penv respects the derivation sequence PB =(a1; ps1)=> ... =(ak; psk)=> PB' if whenever (ai, psi) have the form (fork_L b, (pi1, pi2)) then penv(pi2) = L; this ensures that the newly created process (pi2) indeed has a label (in L) as reported by the semantics.

We can now redefine the function COUNT of Section 4.
Given a list X of pairs of actions and lists of process identifiers define

  COUNTpenv(X) = λLa. λL.(CC_La(X, L), CI_La(X, L), CO_La(X, L), CF_La(X, L))

  CC_La(X, L): the number of elements of the form (t chan_L, pi) in X where penv(pi) = La,
  CI_La(X, L): the number of elements of the form (L'!t?L, (pi', pi)) in X where penv(pi) = La,
  CO_La(X, L): the number of elements of the form (L!t?L', (pi, pi')) in X where penv(pi) = La, and
  CF_La(X, L): the number of elements of the form (fork_L b, (pi, pi')) in X where penv(pi) = La.

The analysis itself is as follows:

  benv |- τ : [ ] & [ ]

  benv |- L!t : [L ↦ (o, o, i, o)] & [ ]

  benv |- L?t : [L ↦ (o, i, o, o)] & [ ]

  benv |- t chan_L : [L ↦ (i, o, o, o)] & [ ]

  benv |- b : A & P
  -----------------------------------------------------
  benv |- fork_L b : [L ↦ (o, o, o, i)] & ([L ↦ A] ⊕ P)

  benv |- b1 : A1 & P1    benv |- b2 : A2 & P2
  --------------------------------------------
  benv |- b1; b2 : A1 ⊕ A2 & P1 ⊕ P2

  benv |- b1 : A1 & P1    benv |- b2 : A2 & P2
  --------------------------------------------
  benv |- b1 + b2 : A1 ⊔ A2 & P1 ⊔ P2

  benv[β ↦ A & P] |- b : A & P
  ----------------------------
  benv |- rec β : b : A & P

  benv |- β : A & P    if benv(β) = A & P

  Table 3: Analysis for static process allocation

Soundness of the analysis then amounts to:

Theorem 6.2 Assume that ∅ |- b : A & P and [pi0 ↦ b] =(a1; ps1)=> ... =(ak; psk)=> PB and let penv be a mapping from process identifiers to elements of LabSet respecting the above derivation sequence and such that penv(pi0) = L0. We then have

  R*(COUNTpenv[(a1, ps1), ..., (ak, psk)]) ⊑ (P ⊕ [L0 ↦ A])

where R*(C)(La)(L) = (R(c), R(i), R(o), R(f)) if C(La)(L) = (c, i, o, f).

Note that the lefthand side of the inequality counts the number of operations for all processes whose label is given (by La); hence our information is useful for static processor allocation.

To obtain an efficient implementation of the analysis it is once more profitable to generate an equation system.
This is hardly any different from the approach of Section 5 except that by now there is even greater scope for decomposing the flow variables into families of flow variables over simpler value spaces.

7 Dynamic Processor Allocation

The idea behind the dynamic processor allocation is that the decision of how to place processes on processors is taken dynamically. Again we will be interested in knowing which requirements this puts on the processor but in contrast to the previous section we are only concerned with a single process rather than all processes with a given label. We shall now modify the analysis of Section 6 to associate worst-case information with the process labels rather than accumulating the total information. For each process label La we therefore ask the four questions of Section 4 taking the maximum information over all processes with label La: how many times are channels labelled by L created, how many times do channels labelled by L participate in input, how many times do channels labelled by L participate in output, and how many times are processes labelled by L generated?

  benv |- b : A & P
  -----------------------------------------------------
  benv |- fork_L b : [L ↦ (o, o, o, i)] & ([L ↦ A] ⊔ P)

  benv |- b1 : A1 & P1    benv |- b2 : A2 & P2
  --------------------------------------------
  benv |- b1; b2 : A1 ⊕ A2 & P1 ⊔ P2

  benv[β ↦ A] |- b : A & P
  -------------------------
  benv |- rec β : b : A & P

  benv |- β : A & [ ]    if benv(β) = A

  Table 4: Analysis for dynamic process allocation

Example 7.1 Let us return to the pipe function of Examples 2.1 and 2.2 and suppose that we want to perform dynamic processor allocation. This means that all the processes labelled ℓ need not reside on the same processor.
The analysis should therefore estimate the maximal requirements of the instances of these processes as follows:

  main: L1: m channels created
        ℓ:  m processes created
  ℓ:    L1: m inputs performed
        L2: m outputs performed
        L:  i output performed

Note that now we do record that each individual process labelled by ℓ actually only communicates over L at most once.

Returning to the processor network of Example 6.1 we may allocate the main program on P1 and the remaining processes on P2 and P3 (and possibly P1 as well): say f1 and f3 on P2 and f2 and id on P3. Facilities for multitasking are needed on P2 and P3 and facilities for multiplexing on all of P1, P2 and P3.

The inference system still has formulae

  benv |- b : A & P

where A and P are as in Section 6 and now benv is as in Section 4: it does not incorporate the P component (it could be as in Section 6 as well because we now combine P components using ⊔ rather than ⊕). Most of the axioms and rules are as in Table 3; the modifications are listed in Table 4.

A difference from Section 6 is that now we need to keep track of the individual process identifiers. We therefore redefine the function COUNTpenv as follows:

  COUNTpenv(X) = λLa. λL.(CC_PI(X, L), CI_PI(X, L), CO_PI(X, L), CF_PI(X, L))
                 where PI = penv⁻¹(La)

  CC_PI(X, L): the maximum over all pi ∈ PI of the number of elements of the form (t chan_L, pi) in X,
  CI_PI(X, L): the maximum over all pi ∈ PI of the number of elements of the form (L'!t?L, (pi', pi)) in X,
  CO_PI(X, L): the maximum over all pi ∈ PI of the number of elements of the form (L!t?L', (pi, pi')) in X, and
  CF_PI(X, L): the maximum over all pi ∈ PI of the number of elements of the form (fork_L b, (pi, pi')) in X.

Soundness of the analysis then amounts to:

Theorem 7.2 Assume that ∅ |- b : A & P and [pi0 ↦ b] =(a1; ps1)=> ... =(ak; psk)=> PB and let penv be a mapping from process identifiers to elements of LabSet respecting the above derivation sequence and such that penv(pi0) = L0. We then have

  R*(COUNTpenv[(a1, ps1), ..., (ak, psk)]) ⊑ (P ⊔ [L0 ↦ A])

where R* is as in Theorem 6.2.

Note that the lefthand side of the inequality gives the maximum number of operations over all processes with a given label; hence our information is useful for dynamic processor allocation.

To obtain an efficient implementation of the analysis it is once more profitable to generate an equation system and the remarks at the end of the previous section still apply.

8 Conclusion

The specifications of the analyses for static and dynamic allocation have much in common; the major difference of course being that for static processor allocation we accumulate the total numbers whereas for dynamic processor allocation we calculate the maximum; a minor difference being that for the static analysis it was crucial to let behaviour environments include the P component whereas for the dynamic analysis this was hardly of any importance.

This difference in approach is reminiscent of the difference between the formulation of MFP-style and MOP-style analyses: in the former the effects of paths (corresponding to process identifiers with the same label set) are merged along the way whereas in the latter the paths (corresponding to the process identifiers) have to be kept separate and their effects can only be merged when the propagation of effects has taken place.

Acknowledgements. We would like to thank Torben Amtoft for many interesting discussions. This research has been funded in part by the LOMAPS (ESPRIT BRA) and DART (Danish Science Research Council) projects.

References

[1] T. Amtoft, F. Nielson, H. R. Nielson: Type and behaviour reconstruction for higher-order concurrent programs. This proceedings.

[2] J. Cai, R. Paige: Program Derivation by Fixed Point Computation. Science of Computer Programming 11, pp. 197-261, 1989.

[3] R. Cridlig, E. Goubault: Semantics and analysis of Linda-based languages.
Proc.Static Analysis, Springer Lecture Notes in Computer Science 724, 1993.[4] C.E.McDowell: A practical algorithm for static analysis of parallel programs.Journal of parallel and distributed computing 6, 1989.[5] A.Giacalone, P.Mishra, S.Prasad: Operational and Algebraic Semantics for Facile:a Symmetric Integration of Concurrent and Functional Programming. Proc.ICALP'90, Springer Lecture Notes in Computer Science 443, 1990.[6] K.Havelund, K.G.Larsen: The Fork Calculus. Proc. ICALP'93, Springer LectureNotes in Computer Science 700, 1993.[7] M.S.Hecht: Flow Analysis of Computer Programs, North-Holland, 1977.[8] Y.-C.Hung, G.-H.Chen: Reverse reachability analysis: a new technique for dead-lock detection on communicating nite state machines. Software | Practice andExperience 23, 1993.[9] S.Jagannathan, S.Week: Analysing stores and references in a parallel symboliclanguage. Proc. L&FP;, 1994.[10] M.Jourdan, D.Parigot: Techniques for Improving Grammar Flow Analysis. Proc.ESOP'90, Springer Lecture Notes in Computer Science 432, pp. 240{255, 1990.[11] N. Mercouro : An algorithm for analysing communicating processes. Proc. ofMFPS, Springer Lecture Notes in Computer Science 598, 1992.[12] F.Nielson, H.R.Nielson: From CML to Process Algebras. Proc. CONCUR'93,Springer Lecture Notes in Computer Science 715, 1993.[13] H.R.Nielson, F.Nielson: Higher-Order Concurrent Programs with Finite Commu-nication Topology. Proc. POPL'94, pp. 84{97, ACM Press, 1994.[14] F.Nielson, H.R.Nielson: Constraints for Polymorphic Behaviours for ConcurrentML. Proc. CCL'94, Springer Lecture Notes in Computer Science 845, 1994.[15] J.H.Reif, S.A.Smolka: Data ow analysis of distributed communicating processes.International Journal of Parallel Programs 19, 1990.[16] J.R.Reppy: Concurrent ML: Design, Application and Semantics. Springer LectureNotes in Computer Science 693, pp. 165{198, 1993.[17] R.Tarjan: Iterative Algorithms for Global Flow Analysis. In J.Traub (ed.), Algo-rithms and Complexity, pp. 
91–102, Academic Press, 1976.
[18] B. Thomsen: Personal communication, May 1994.

Termination of order-sorted rewriting

Peter C. Ölveczky
Department of Informatics, University of Oslo
E-mail: [email protected]

Abstract

We present a method for proving termination of order-sorted rewrite systems by transforming an order-sorted rewrite system into an unsorted one such that termination of the latter implies termination of the order-sorted system. The method is inspired by ideas of Gnaedig and Ganzinger and contains as special cases the method presented in [Gn92a] and the method that simply ignores sort information.

1 Introduction

Order-sorted specifications, i.e. many-sorted specifications where the set of sorts is partially ordered, have been introduced to provide a more powerful type concept, allowing us to express partiality of functions, error handling and subtype inheritance [GM88]. In languages like OBJ3 [GW88], order-sorted specifications have an operational semantics based on rewriting.

One of the most important properties a rewrite system may have is termination, which means that no infinite computation can take place. Termination is especially important in completion procedures, and, last but not least, proving termination of order-sorted rewrite systems ensures termination of programs written in languages like OBJ3.

In general it is undecidable whether a rewrite system terminates, but numerous techniques (e.g. based on the lexicographic path ordering [KL80, Der87]) have been developed that with varying degrees of success may be used to prove termination of unsorted rewrite systems. These techniques have been used to prove termination of order-sorted systems by simply ignoring sort information [CH93, Gan91, GKK90, Gn92b, Wal92]. However, this approach is not good enough, since many order-sorted systems terminate where the corresponding unsorted system would not terminate.
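The gap between sorted and unsorted termination can be made concrete with a small interpreter for the rewrite system of Example 1 below. The encoding is ours (tuples for ground terms, innermost evaluation), not part of the paper; it simply animates the six rules.

```python
# A small interpreter (our encoding, not the paper's) for Example 1 below:
# ground integers are s(...s(0)...) of sort Pos or p(...p(0)...) of sort Neg.
# The sort discipline is respected by construction: opp only receives
# NzNeg terms p(y).

def opp(t):                       # opp : NzNeg -> NzPos
    y = t[1]                      # t = p(y)
    return ('s', ('0',)) if y == ('0',) else ('s', opp(y))  # opp(p(0)) -> s(0)

def is_even(t):                   # is_even : Int -> Bool
    if t == ('0',):               # is_even(0) -> true
        return True
    if t == ('s', ('0',)):        # is_even(s(0)) -> false
        return False
    if t[0] == 's':               # is_even(s(s(x))) -> is_even(x), x : Pos
        return is_even(t[1][1])
    return is_even(opp(t))        # is_even(y) -> is_even(opp(y)), y : NzNeg

def num(n):                       # ground term for an integer n
    t, f = ('0',), ('s' if n >= 0 else 'p')
    for _ in range(abs(n)):
        t = (f, t)
    return t

print([is_even(num(n)) for n in (-4, -3, 0, 5, 6)])
# [True, False, True, False, True] -- every run terminates
```

Dropping the sort discipline, i.e. allowing opp to be applied to arbitrary terms, is exactly what lets the fourth rule loop in the unsorted reading.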
This is illustrated by the following example, taken from [Gn92a], where the problem of order-sorted termination was first addressed.

Example 1 This system defines a predicate is_even over the integers. The idea is to define it over the positive numbers first and to extend it to the negative numbers by using the operation opp.

sorts: Zero, NzNeg, Neg, NzPos, Pos, Int, Bool
  Zero ≤ Neg ≤ Int
  Zero ≤ Pos ≤ Int
  NzNeg ≤ Neg
  NzPos ≤ Pos

funcs:
  0 : Zero
  s : Pos → NzPos
  p : Neg → NzNeg
  true : Bool
  false : Bool
  is_even : Int → Bool
  opp : NzNeg → NzPos

rules:
  is_even(0) → true
  is_even(s(0)) → false
  (∀x : Pos) is_even(s(s(x))) → is_even(x)
  (∀y : NzNeg) is_even(y) → is_even(opp(y))
  opp(p(0)) → s(0)
  (∀y : NzNeg) opp(p(y)) → s(opp(y))

The system is terminating, the normal form of any ground term is_even(t) being either true or false. But if sorts were ignored, the system would not terminate, since the fourth rule would generate the infinite rewrite sequence

  is_even(y) → is_even(opp(y)) → is_even(opp(opp(y))) → ...

However, the system does terminate because opp(y : NzNeg) is of sort NzPos, so y cannot be instantiated with opp(t) for any term t.

In Ganzinger [Gan91], where an order-sorted rewrite system is transformed into a many-sorted equational rewrite system, each function declaration f : w → s induces a (many-sorted) function symbol f_{w→s}. This idea was adopted by Gnaedig [Gn92a], where these labelled function symbols are used to prove termination of order-sorted rewrite systems where the signature is minimal, i.e. if f : w → s and f : w' → s' are two function declarations then w and w' must be incomparable with respect to ≤. Termination can then be proven if l' >_lpo r' for every rule l → r, t' being t where each function symbol f is replaced with the "corresponding" symbol f_{w→s}. Unfortunately, not many signatures are minimal, and even for those that are minimal, this approach is not very strong (e.g.
this method cannot show termination of the above system, even though the signature is minimal).

We will also pursue the idea of labelling each function symbol in a term, but for our purpose the best result is obtained when a function symbol is labelled with the smallest sorts of its arguments. In the previous example the fourth rule would be labelled

  is_even_NzNeg(y) → is_even_NzPos(opp_NzNeg(y))

The symbols is_even_NzNeg and is_even_NzPos may then be treated as distinct symbols in e.g. the lexicographic path ordering, making the inequality

  is_even_NzNeg(y) >_lpo is_even_NzPos(opp_NzNeg(y))

valid with precedence is_even_NzNeg >_F is_even_NzPos and is_even_NzNeg >_F opp_NzNeg.

We will use such labelling to transform an order-sorted rewrite system R into an unsorted rewrite system R' such that R terminates whenever R' does. We can thus use unsorted termination orderings to prove that R', and hence R, terminates. However, special features of termination orderings imply that the best result after applying such an ordering is not guaranteed when the transformation R' is the most accurate. We will keep this in mind and will therefore exemplify our method with the lexicographic path ordering.

This paper is an extended abstract of [Ölv94], where a more thorough investigation of the subject of order-sorted termination, as well as the correctness proofs for the claims in this paper, can be found.

2 Basic notions

2.1 Order-sorted rewriting

We will only introduce the syntactic aspects of order-sorted algebra and refer to [GM88] and [Wal92] for the semantics of order-sorted specifications. In this paper, notions and notations mainly follow [GM88] and [GKK90].

Order-sorted algebra is based on a subsort relation ≤, a partial order on a set S of sorts. We write s' < s when s' ≤ s and s' ≠ s, and ≥ for the symmetric relation of ≤. The subsort relation can be extended to n-tuples of sorts (often denoted w), where w = ⟨s1, ..., sn⟩ ≤ ⟨s'1, ..., s'n⟩ = w' iff si ≤ s'i for i = 1..n, and w < w' iff w ≤ w' and w ≠ w'.
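The subsort relation and its pointwise extension to tuples can be sketched directly; the encoding below is ours, using the sort hierarchy of Example 1 (only the declared covering pairs are listed, and ≤ is their reflexive-transitive closure).

```python
# A sketch (our encoding) of the subsort relation of this section, on the
# sort hierarchy of Example 1, with the pointwise extension to sort tuples.

from functools import lru_cache

SUBSORT = {('Zero', 'Neg'), ('Zero', 'Pos'), ('Neg', 'Int'),
           ('Pos', 'Int'), ('NzNeg', 'Neg'), ('NzPos', 'Pos')}

@lru_cache(maxsize=None)
def leq(s, t):
    """s <= t: reflexive-transitive closure of the declared subsort pairs."""
    return s == t or any(leq(u, t) for (v, u) in SUBSORT if v == s)

def leq_tuple(w, w2):
    """w <= w' iff the tuples have equal length and s_i <= s'_i for all i."""
    return len(w) == len(w2) and all(leq(a, b) for a, b in zip(w, w2))

print(leq('NzNeg', 'Int'))                           # True: NzNeg <= Neg <= Int
print(leq('NzNeg', 'Pos'))                           # False: different branches
print(leq_tuple(('Zero', 'NzPos'), ('Neg', 'Pos')))  # True, componentwise
```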
We write w ⋖ w' when w < w' and w and w' differ only in the i-th position, i.e. w = ⟨s1, ..., s_{i-1}, si, s_{i+1}, ..., sn⟩, w' = ⟨s1, ..., s_{i-1}, s'i, s_{i+1}, ..., sn⟩, and si < s'i.

The type system contains the following components: the binary function type constructor →, the constant type Int, the possibility for creating recursive types, and two more constant types ⊤ and ⊥. Moreover, there is a subtype relation, written ≤. In contrast, safety analysis uses an abstract domain containing sets of syntactic occurrences of abstractions and the constant Int.

In slogan form, our result reads:

  Flow analysis + Safety checks = Simple types + Recursive types + ⊤ + ⊥ + Subtyping

Each component of the type system captures a facet of flow analysis:

- The function type constructor → corresponds to a set of abstractions. Intuitively, a function type is less concrete than a set of abstractions. Indeed, the other components of the type system are essential to make it accept the same programs as the safety analysis.

- The constant Int is used for the same purpose in both systems. For simplicity, we do not consider other base types, or product and sum constructors, etc. Such constructs can be handled by techniques that are similar to the ones we will present.

- Recursive types are needed in order that safety analysis accepts all programs that do not contain constants.

- The constant ⊤ corresponds to the largest possible set of flow information. This type is needed for variables which can hold both a function and a base value. Intuitively, a program with such a variable should be type incorrect. However, the flow-based analysis may detect that this variable is only passed around but never actually used. For the type system to have that capability, ⊤ is required.

- The constant ⊥ corresponds to the empty set of flow information. This type is needed for variables which are used both as a function and as a base value. Intuitively, a program that uses a variable in both these ways should be type incorrect.
However, the flow-based analysis may detect that this part of the program will never be executed. For the type system to have that capability, ⊥ is required.

- Subtyping is needed to capture flow of information. Intuitively, if information flows from A to B, then the type of A will be a subtype of the type of B.

Palsberg and Schwartzbach [10, 11] proved that the system without ⊥ accepts at most as many programs as safety analysis. In this paper we present the type system which accepts exactly the same programs as safety analysis. This may be seen as a natural culmination of the previous results.

1.3 Examples

Our example language is a λ-calculus, generated by the following grammar:

  E ::= x | λx.E | E1 E2 | 0 | succ E

Programs that yield a run-time error include (0 x), succ(λx.x), and (succ 0)(x), because 0 is not a function, succ cannot be applied to functions, and (succ 0) is not a function. These programs are not typable and they are rejected by safety analysis. Some programs can be typed in the type system without the use of ⊥ and ⊤, for example

  λx.xx : α where α = α → α,

where E : t means "E has type t". Some programs require the use of ⊤, for example

  (λf.(λx.fI)(f0))I : ⊤,

where I = λx.x. Note that ⊤ is the only type of (λf.(λx.fI)(f0))I. Some programs require the use of ⊥, for example

  λx.x(succ x) : ⊥ → t for any t.

Both type inference and safety analysis can be phrased as solving a system of constraints, derived from the program text. We will now present the constraint systems for the last of the above examples. For notational convenience, we give each of the two occurrences of x a label so that the λ-term reads λx.x1(succ x2). For brevity, let E = λx.x1(succ x2). The constraint system for type inference looks as follows:

  x → [[x1(succ x2)]] ≤ [[E]]
  [[x1]] ≤ [[succ x2]] → [[x1(succ x2)]]
  x ≤ [[x1]]
  x ≤ [[x2]]
  Int ≤ [[succ x2]]
  [[x2]] ≤ Int

Here, the symbols x, [[x1]], [[x2]], [[succ x2]], [[x1(succ x2)]], [[E]] are type variables. Solving this constraint system yields that the possible types for the λ-term λx.x(succ x) are ⊤ and ⊥ →
t for any type t. Among these, ⊥ → ⊥ is a least type. In general, however, such a constraint system need not have a least solution.

The constraint system for safety analysis looks as follows:

  {E} ⊆ [[E]]
  [[x1]] ⊆ {E}
  x ⊆ [[x1]]
  x ⊆ [[x2]]
  {E} ⊆ [[x1]] ⇒ [[succ x2]] ⊆ x
  {E} ⊆ [[x1]] ⇒ [[x1(succ x2)]] ⊆ [[x1(succ x2)]]
  {Int} ⊆ [[succ x2]]
  [[x2]] ⊆ {Int}

If such a constraint system is solvable, then it has a least solution. This particular constraint system is indeed solvable, and the least solution is the mapping φ, where

  φ([[E]]) = {E}
  φ([[succ x2]]) = {Int}
  φ([[x1(succ x2)]]) = φ(x) = φ([[x1]]) = φ([[x2]]) = ∅

In the following two sections we present the type system and the safety analysis, and in Section 4 we prove that they accept the same programs.

2 The type system

2.1 Types

Definition 1 Let Σ = {→, Int, ⊥, ⊤} be the ranked alphabet where → is binary and Int, ⊥, ⊤ are nullary. A type is a regular tree over Σ. A path from the root of such a tree is a string over {0, 1}, where 0 indicates "left subtree" and 1 indicates "right subtree". □

Definition 2 We represent a type by a term, that is, a partial function

  t : {0, 1}* → Σ

with domain D(t), where t maps each path from the root of the type to the symbol at the end of the path. The set of all such terms is denoted T. □

Following [6], we finitely represent a term by a so-called term automaton, as follows.

Definition 3 A term automaton over Σ is a tuple

  M = (Q, Σ, q0, δ, ℓ)

where:

- Q is a finite set of states,
- q0 ∈ Q is the start state,
- δ : Q × {0, 1} → Q is a partial function called the transition function, and
- ℓ : Q → Σ is a (total) labeling function,

such that for any state q ∈ Q, if ℓ(q) ∈ {→} then

  {i | δ(q, i) is defined} = {0, 1}

and if ℓ(q) ∈ {Int, ⊥, ⊤} then

  {i | δ(q, i) is defined} = ∅.

The partial function δ extends naturally to a partial function

  δ̂ : Q × {0, 1}* →
Q, defined inductively as follows:

  δ̂(q, ε) = q
  δ̂(q, αi) = δ(δ̂(q, α), i), for i ∈ {0, 1}.

The term represented by M is the term

  t_M = λα.ℓ(δ̂(q0, α)). □

Intuitively, t_M(α) is determined by starting in the start state q0 and scanning the input α, following transitions of M as far as possible. If it is not possible to scan all of α because some i-transition along the way does not exist, then t_M(α) is undefined. If on the other hand M scans the entire input α and ends up in state q, then t_M(α) = ℓ(q).

Types are ordered by the subtype relation ≤, as follows.

Definition 4 The parity of α ∈ {0, 1}* is the number mod 2 of 0's in α. The parity of α is denoted πα. A string α is said to be even if πα = 0 and odd if πα = 1. Let ≤0 be the partial order on Σ given by

  ⊥ ≤0 → and → ≤0 ⊤ and ⊥ ≤0 Int and Int ≤0 ⊤

and let ≤1 be its reverse

  ⊤ ≤1 → and → ≤1 ⊥ and ⊤ ≤1 Int and Int ≤1 ⊥

For s, t ∈ T, define s ≤ t if s(α) ≤πα t(α) for all α ∈ D(s) ∩ D(t). □

Kozen, Palsberg, and Schwartzbach [6] showed that the relation ≤ is equivalent to the order defined by Amadio and Cardelli [1]. The relation ≤ is a partial order, and if s → t ≤ s' → t', then s' ≤ s and t ≤ t' [1, 6].

2.2 Type rules

If E is a λ-term, t is a type, and A is a type environment, i.e. a partial function assigning types to variables, then the judgement

  A ⊢ E : t

means that E has the type t in the environment A. Formally, this holds when the judgement is derivable using the following six rules:

  A ⊢ 0 : Int   (1)

  A ⊢ E : Int
  ----------------   (2)
  A ⊢ succ E : Int

  A ⊢ x : t   (provided A(x) = t)   (3)

  A[x ↦ s] ⊢ E : t
  -----------------   (4)
  A ⊢ λx.E : s → t

  A ⊢ E : s → t    A ⊢ F : s
  ---------------------------   (5)
  A ⊢ EF : t

  A ⊢ E : s    s ≤ t
  -------------------   (6)
  A ⊢ E : t

The first five rules are the usual rules for simple types and the last rule is the rule of subsumption.

The type system has the subject reduction property, that is, if A ⊢ E : t is derivable and E β-reduces to E', then A ⊢ E' : t is derivable. This is proved by straightforward induction on the structure of the derivation of A ⊢ E : t.

2.3 Constraints

Given a λ-term E, the type inference problem can be rephrased in terms of solving a system of type constraints.
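The term automata of Definition 3 are easy to make concrete. The sketch below (our own encoding, not the paper's code) represents the regular type α = α → Int with two states and reads off t_M along paths, as in Definition 2.

```python
# A sketch of Definition 3's term automata: a two-state automaton for the
# regular (recursive) type alpha = alpha -> Int. Names are ours.

class TermAutomaton:
    def __init__(self, start, delta, label):
        self.start, self.delta, self.label = start, delta, label

    def run(self, path):
        """delta-hat: follow the path from the start state, if defined."""
        q = self.start
        for i in path:
            if (q, i) not in self.delta:
                return None              # t_M is undefined on this path
            q = self.delta[(q, i)]
        return q

    def term(self, path):
        """t_M(path): label of the reached state, or None if undefined."""
        q = self.run(path)
        return None if q is None else self.label[q]

# State 'a' is the arrow node; its left (0) child loops back to 'a',
# its right (1) child is labelled Int, so the infinite tree is
# (((... -> Int) -> Int) -> Int).
M = TermAutomaton(
    start='a',
    delta={('a', 0): 'a', ('a', 1): 'i'},
    label={'a': '->', 'i': 'Int'},
)

print(M.term(()))         # '->'   (the root)
print(M.term((0, 0, 1)))  # 'Int'  (left, left, right)
print(M.term((1, 0)))     # None   (Int is nullary, no 0-transition)
```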
Assume that E has been α-converted so that all bound variables are distinct. Let X_E be the set of λ-variables x occurring in E, and let Y_E be a set of variables disjoint from X_E consisting of one variable [[F]] for each occurrence of a subterm F of E. (The notation [[F]] is ambiguous because there may be more than one occurrence of F in E. However, it will always be clear from context which occurrence is meant.) We generate the following system of inequalities over X_E ∪ Y_E:

- for every occurrence in E of a subterm of the form 0, the inequality

    Int ≤ [[0]];

- for every occurrence in E of a subterm of the form succ F, the two inequalities

    Int ≤ [[succ F]]
    [[F]] ≤ Int;

- for every occurrence in E of a subterm of the form λx.F, the inequality

    (x → [[F]])_{λx.F} ≤ [[λx.F]];

- for every occurrence in E of a subterm of the form GH, the inequality

    [[G]] ≤ ([[H]] → [[GH]])_{GH};

- for every occurrence in E of a λ-variable x, the inequality

    x ≤ [[x]].

The subscripts are present to ease notation in Section 4.1; they have no semantic impact and will be explicitly written only in Section 4.1.

Denote by T(E) the system of constraints generated from E in this fashion. The solutions of T(E) over T correspond to the possible type annotations of E in a sense made precise by Theorem 5.

Let A be a type environment assigning a type to each λ-variable occurring freely in E. If ψ is a function assigning a type to each variable in X_E ∪ Y_E, we say that ψ extends A if A and ψ agree on the domain of A.

Theorem 5 The judgement A ⊢ E : t is derivable if and only if there exists a solution ψ of T(E) extending A such that ψ([[E]]) = t. In particular, if E is closed, then E is typable with type t if and only if there exists a solution ψ of T(E) such that ψ([[E]]) = t.

Proof. Similar to the proof of Theorem 2.1 in the journal version of [7], in outline as follows. Given a solution of the constraint system, it is straightforward to construct a derivation of A ⊢ M : t.
Conversely, observe that if A ⊢ M : t is derivable, then there exists a derivation of A ⊢ M : t such that each use of one of the ordinary rules is followed by exactly one use of the subsumption rule. The approach in, for example, [14, 11] then gives a set of inequalities of the desired form. □

3 The safety analysis

Following [10, 11], we will use a flow analysis as a basis for a safety analysis. Given a λ-term E, assume that E has been α-converted so that all bound variables are distinct. The set Abs(E) is the set of subterms of E of the form λx.F. The set Cl(E) is the powerset of Abs(E) ∪ {Int}. Safety analysis of a λ-term E can be phrased as solving the following system of constraints over X_E ∪ Y_E, where type variables range over Cl(E).

- For every occurrence in E of a subterm of the form 0, the constraint

    {Int} ⊆ [[0]];

- for every occurrence in E of a subterm of the form succ F, the two constraints

    {Int} ⊆ [[succ F]]
    [[F]] ⊆ {Int}

  where the latter provides a safety check;

- for every occurrence in E of a subterm of the form λx.F, the constraint

    ({λx.F})_{λx.F} ⊆ [[λx.F]];

- for every occurrence in E of a subterm of the form GH, the constraint

    [[G]] ⊆ (Abs(E))_{GH},

  which provides a safety check;

- for every occurrence in E of a λ-variable x, the constraint

    x ⊆ [[x]];

- for every occurrence in E of a subterm of the form λx.F, and for every occurrence in E of a subterm of the form GH, the constraints

    ({λx.F})_{λx.F} ⊆ [[G]] ⇒ [[H]] ⊆ x
    ({λx.F})_{λx.F} ⊆ [[G]] ⇒ [[F]] ⊆ [[GH]].

Again, the subscripts are present to ease notation in Section 4.1; they have no semantic impact and will be explicitly written only in Section 4.1.

Denote by C(E) the system of constraints generated from E in this fashion. A solution of C(E) assigns an element of Cl(E) to each type variable such that all constraints are satisfied. Solutions are ordered by variable-wise set inclusion. See [10, 12] for a cubic time algorithm that, given E, computes the least solution of C(E) or decides that none exists. See [9] for a proof of the following subject reduction property.
If E β-reduces to E', and C(E) has solution φ, then C(E') also has solution φ.

4 Equivalence

4.1 Deductive closures

We now introduce two auxiliary constraint systems called C̄(E) and T̄(E). They may be thought of as "deductive closures" of C(E) and T(E). We then show that they are isomorphic (Theorem 9).

Definition 6 For every λ-term E, define C̄(E) to be the smallest set such that:

- The non-conditional constraints of C(E) are members of C̄(E).
- If a constraint c ⇒ K is in C(E) and c is in C̄(E), then K is in C̄(E).
- For s ∈ X_E ∪ Y_E, if r ⊆ s and s ⊆ t both are in C̄(E), then r ⊆ t is in C̄(E).

Notice that every constraint in C̄(E) is of the form W ⊆ W', where W is of the forms V, {Int}, or ({λx.F})_{λx.F}, and where W' is of the forms V, {Int}, or (Abs(E))_{GH}, for V ∈ X_E ∪ Y_E. □

Definition 7 For every λ-term E, define T̄(E) to be the smallest set such that:

- T(E) ⊆ T̄(E).
- If (s → t)_{λx.F} ≤ (s' → t')_{GH} is in T̄(E), then s' ≤ s and t ≤ t' are in T̄(E).
- For s ∈ X_E ∪ Y_E, if r ≤ s and s ≤ t both are in T̄(E), then r ≤ t is in T̄(E).

Notice that every constraint in T̄(E) is of the form W ≤ W', where W is of the forms V, Int, or (V → V')_{λx.F}, and where W' is of the forms V, Int, or (V → V')_{GH}, for V, V' ∈ X_E ∪ Y_E. □

We will now present the definition of two functions I and J, one from C̄(E) to T̄(E) and one from T̄(E) to C̄(E). After the definition we prove that they are well-defined and each other's inverses.

Definition 8 The functions

  I : C̄(E) → T̄(E)
  J : T̄(E) → C̄(E)

are defined as follows:

  I(W ⊆ W') = (L_I(W) ≤ L_I(W'))
  J(W ≤ W') = (L_J(W) ⊆ L_J(W'))

where the functions L_I and L_J are:

  L_I(W) = W                        if W ∈ X_E ∪ Y_E
  L_I(W) = Int                      if W = {Int}
  L_I(W) = (x → [[F]])_{λx.F}       if W = ({λx.F})_{λx.F}
  L_I(W) = ([[H]] → [[GH]])_{GH}    if W = (Abs(E))_{GH}

  L_J(W) = W                        if W ∈ X_E ∪ Y_E
  L_J(W) = {Int}                    if W = Int
  L_J(W) = ({λx.F})_{λx.F}          if W = (x → [[F]])_{λx.F}
  L_J(W) = (Abs(E))_{GH}            if W = ([[H]] → [[GH]])_{GH}

□

Theorem 9 The sets C̄(E) and T̄(E) are isomorphic, and I and J are bijections and each other's inverses.

Proof.
If I and J are well-defined, then clearly they are inverses of each other and thus bijections, so C̄(E) and T̄(E) are isomorphic.

First we show that I is well-defined. We proceed by induction on the construction of C̄(E). In the base case, consider the non-conditional constraints of C(E) and observe that for those we have:

  C̄(E)                               T̄(E)
  {Int} ⊆ [[0]]                       Int ≤ [[0]]
  {Int} ⊆ [[succ F]]                  Int ≤ [[succ F]]
  [[F]] ⊆ {Int}                       [[F]] ≤ Int
  ({λx.F})_{λx.F} ⊆ [[λx.F]]          (x → [[F]])_{λx.F} ≤ [[λx.F]]
  [[G]] ⊆ (Abs(E))_{GH}               [[G]] ≤ ([[H]] → [[GH]])_{GH}
  x ⊆ [[x]]                           x ≤ [[x]]

It follows that the lemma holds in the base case.

In the induction step, consider first the constraints

  ({λx.F})_{λx.F} ⊆ [[G]] ⇒ [[H]] ⊆ x
  ({λx.F})_{λx.F} ⊆ [[G]] ⇒ [[F]] ⊆ [[GH]]

in C(E) and suppose ({λx.F})_{λx.F} ⊆ [[G]] is in C̄(E). By the induction hypothesis, (x → [[F]])_{λx.F} ≤ [[G]] is in T̄(E). Moreover, [[G]] ≤ ([[H]] → [[GH]])_{GH} is in T(E) and thus also in T̄(E). Hence, (x → [[F]])_{λx.F} ≤ ([[H]] → [[GH]])_{GH} is in T̄(E), so also [[H]] ≤ x and [[F]] ≤ [[GH]] are in T̄(E).

Consider then r ⊆ s and s ⊆ t in C̄(E), and suppose s ∈ X_E ∪ Y_E. By the induction hypothesis, L_I(r) ≤ L_I(s) and L_I(s) ≤ L_I(t) are in T̄(E). From s ∈ X_E ∪ Y_E we get L_I(s) = s, so L_I(r) ≤ L_I(t) is in T̄(E).

Then we show that J is well-defined. We proceed by induction on the construction of T̄(E). In the base case, consider the constraints of T(E). Using the same table as above, we observe that J is well-defined on all these constraints.

In the induction step, consider first (x → [[F]])_{λx.F} ≤ ([[H]] → [[GH]])_{GH} in T̄(E). It is sufficient to prove that L_J([[H]]) ⊆ L_J(x) and L_J([[F]]) ⊆ L_J([[GH]]) are in C̄(E), or equivalently, that [[H]] ⊆ x and [[F]] ⊆ [[GH]] are in C̄(E). In C(E) we have

  ({λx.F})_{λx.F} ⊆ [[G]] ⇒ [[H]] ⊆ x
  ({λx.F})_{λx.F} ⊆ [[G]] ⇒ [[F]] ⊆ [[GH]].

Moreover, ({λx.F})_{λx.F} ⊆ [[G]] is in C̄(E). Hence, [[H]] ⊆ x and [[F]] ⊆ [[GH]] are in C̄(E).

Consider then r ≤ s and s ≤ t in T̄(E), and suppose s ∈ X_E ∪ Y_E. By the induction hypothesis, L_J(r) ⊆ L_J(s) and L_J(s) ⊆ L_J(t) are in C̄(E).
From s ∈ X_E ∪ Y_E we get L_J(s) = s, so L_J(r) ⊆ L_J(t) is in C̄(E). □

4.2 The equivalence proof

Definition 10 For every λ-term E, let Tmap(E) be the set of total functions from X_E ∪ Y_E to T, and let Cmap(E) be the set of total functions from X_E ∪ Y_E to Cl(E). □

The following construction is the key to mapping flow information to types.

Definition 11 For every λ-term E, φ ∈ Cmap(E), and q0 ∈ Cl(E), define the term automaton A(E, φ, q0) as follows:

  A(E, φ, q0) = (Cl(E), Σ, q0, δ, ℓ)

where:

  δ({λx1.E1, ..., λxn.En}, 0) = ∩_{i=1..n} φ(xi)       for n > 0
  δ({λx1.E1, ..., λxn.En}, 1) = ∪_{i=1..n} φ([[Ei]])   for n > 0

  ℓ(q) = ⊥     if q = ∅
  ℓ(q) = Int   if q = {Int}
  ℓ(q) = →     if q ⊆ Abs(E) and q ≠ ∅
  ℓ(q) = ⊤     otherwise

□

Lemma 12 Suppose φ ∈ Cmap(E) and S1, S2 ∈ Cl(E). If S1 ⊆ S2, then t_{A(E,φ,S1)} ≤ t_{A(E,φ,S2)}.

Proof. Define the orderings ⊆0, ⊆1 on Cl(E) such that ⊆0 equals ⊆ and ⊆1 equals ⊇. The desired conclusion follows immediately from the property that if α ∈ D(t_{A(E,φ,S1)}) ∩ D(t_{A(E,φ,S2)}), then δ̂(S1, α) ⊆_{πα} δ̂(S2, α). This property is proved by straightforward induction on the length of α. □

We can now prove that the type system and the safety analysis accept the same programs.

Theorem 13 For every λ-term E, the following seven conditions are equivalent:

1. C(E) is solvable.
2. T(E) is solvable.
3. T̄(E) is solvable.
4. C̄(E) is solvable.
5. C̄(E) does not contain constraints of the forms {Int} ⊆ Abs(E) or {λx.F} ⊆ {Int}.
6. T̄(E) does not contain constraints of the forms Int ≤ V → V' or V → V' ≤ Int, where V, V' ∈ X_E ∪ Y_E.
7. The function

     λV.{ k | the constraint {k} ⊆ V is in C̄(E) }

   is the least solution of C(E).

Proof. Given a λ-term E, notice that by the isomorphism of Theorem 9, (5) ⇔ (6). To show the remaining equivalences, we proceed by proving the implications:

  (1) ⇒ (2) ⇒ (3) ⇒ (4) ⇒ (5) ⇒ (7) ⇒ (1)

To prove (1) ⇒ (2), suppose C(E) has solution φ ∈ Cmap(E). Let f be the function λS.t_{A(E,φ,S)} and define ψ ∈ Tmap(E) by ψ = f ∘ φ. We will show that T(E) has solution ψ. We consider each of the constraints in turn.
The cases of the constraints generated from subterms of the forms 0, succ E, x are immediate, by using Lemma 12. Consider then λx.F and the constraint x → [[F]] ≤ [[λx.F]]. By Lemma 12 we get

  ψ(x) → ψ([[F]]) = f({λx.F}) ≤ ψ([[λx.F]]).

Consider then GH and the constraint [[G]] ≤ [[H]] → [[GH]]. We know that φ([[G]]) ⊆ Abs(E), so there are two cases. Suppose first that φ([[G]]) = ∅. We then have ψ([[G]]) = ⊥ ≤ ψ([[H]]) → ψ([[GH]]). Consider then the case where φ([[G]]) = {λx1.E1, ..., λxn.En}, for n > 0. We then have that φ([[H]]) ⊆ φ(xi) and φ([[Ei]]) ⊆ φ([[GH]]) for i ∈ {1, ..., n}. Thus, φ([[H]]) ⊆ ∩_{i=1..n} φ(xi) and ∪_{i=1..n} φ([[Ei]]) ⊆ φ([[GH]]). So, by Lemma 12,

  ψ([[G]]) = f(φ([[G]]))
           = f(∩_{i=1..n} φ(xi)) → f(∪_{i=1..n} φ([[Ei]]))
           ≤ f(φ([[H]])) → f(φ([[GH]]))
           = ψ([[H]]) → ψ([[GH]])

To prove (2) ⇒ (3), suppose T(E) has solution ψ ∈ Tmap(E). It is sufficient to show that T̄(E) has solution ψ, and this can be proved by straightforward induction on the construction of T̄(E).

To prove (3) ⇒ (4), suppose T̄(E) has solution ψ ∈ Tmap(E). Define φ ∈ Cmap(E) as follows:

  φ(V) = ∅                   if (ψ(V))(ε) = ⊥
  φ(V) = {Int}               if (ψ(V))(ε) = Int
  φ(V) = Abs(E)              if (ψ(V))(ε) = →
  φ(V) = Abs(E) ∪ {Int}      if (ψ(V))(ε) = ⊤

We will show that C̄(E) has solution φ. To see this, let W ⊆ Z be a constraint in C̄(E). If it is of the forms {Int} ⊆ {Int} or {λx.F} ⊆ Abs(E), then it is solvable by all functions, including φ. For the remaining cases, notice that by Theorem 9, L_I(W) ≤ L_I(Z) is in T̄(E) and thus it has solution ψ. This means that W ⊆ Z cannot be of the forms {Int} ⊆ Abs(E) or {λx.F} ⊆ {Int}. Suppose then that W ⊆ Z is of one of the remaining forms, that is, {Int} ⊆ V, V ⊆ V', V ⊆ {Int}, {λx.F} ⊆ V, V ⊆ Abs(E), where V, V' ∈ X_E ∪ Y_E. We will treat just the first of them, the others being similar. For a constraint of the form {Int} ⊆ V, it follows that Int ≤ V is in T̄(E). Since T̄(E) has solution ψ we get that (ψ(V))(ε) ∈ {Int, ⊤}.
Thus, φ(V) is either {Int} or Abs(E) ∪ {Int}, and hence {Int} ⊆ V has solution φ.

To prove (4) ⇒ (5), observe that constraints of the forms {Int} ⊆ Abs(E) or {λx.F} ⊆ {Int} are not solvable.

To prove (5) ⇒ (7), suppose C̄(E) does not contain constraints of the forms {Int} ⊆ Abs(E) or {λx.F} ⊆ {Int}. Define

  φ0 = λV.{ k | the constraint {k} ⊆ V is in C̄(E) }

We proceed in four steps, as follows.

- First we show that φ0 is a solution of C̄(E). We consider in turn each of the seven possible forms of constraints in C̄(E). Constraints of the forms {Int} ⊆ {Int} and {λx.F} ⊆ Abs(E) have any solution, including φ0. We are thus left with constraints of the forms {Int} ⊆ V, V ⊆ V', V ⊆ {Int}, {λx.F} ⊆ V, V ⊆ Abs(E), where V, V' ∈ X_E ∪ Y_E. We will treat just the first three, since case four is similar to case one and case five is similar to case three. For a constraint of the form {Int} ⊆ V, notice that Int ∈ φ0(V), so the constraint has solution φ0. For a constraint of the form V ⊆ V', suppose k ∈ φ0(V). Then the constraint {k} ⊆ V is in C̄(E), and hence the constraint {k} ⊆ V' is also in C̄(E). It follows that k ∈ φ0(V'). For a constraint of the form V ⊆ {Int}, suppose it does not have solution φ0. Hence, there exists k ∈ φ0(V) such that k ≠ Int. It follows that the constraint {k} ⊆ V is in C̄(E), and hence the constraint {k} ⊆ {Int} is also in C̄(E), a contradiction.

- Next we show that φ0 is the least solution of C̄(E). To do this, let φ be any solution of C̄(E) and suppose V ∈ X_E ∪ Y_E. It is sufficient to prove that φ0(V) ⊆ φ(V). Suppose k ∈ φ0(V). Then the constraint {k} ⊆ V is in C̄(E). Since φ is a solution of C̄(E), k ∈ φ(V).

- Next we show that φ0 is a solution of C(E). Consider first the non-conditional constraints of C(E). Since these constraints are also members of C̄(E), they have solution φ0. Consider then {λx.F} ⊆ V ⇒ K in C(E) and suppose {λx.F} ⊆ V has solution φ0. Then, by the definition of φ0, we have that {λx.F} ⊆ V is in C̄(E), so also K is in C̄(E), and hence K has solution φ0.

- Finally we show that φ0 is the least solution of C(E).
To do this, let φ be any solution of C(E). Then φ is also a solution of C̄(E), as can be proved by straightforward induction on the construction of C̄(E). Since φ0 is the least solution of C̄(E), φ0 is smaller than φ.

To prove (7) ⇒ (1), simply notice that since C(E) has a solution, it is solvable. □

Corollary 14 The type system accepts the same programs as the safety analysis.

4.3 Algorithms

As corollaries of Theorem 13 we get two cubic time algorithms. Given a λ-term E, first observe that both C̄(E) and T̄(E) can be computed in time O(n³), where n is the size of E. We can then easily answer the following two questions:

Question (safety): Is E accepted by safety analysis?

Algorithm: Check that C̄(E) does not contain constraints of the forms {Int} ⊆ Abs(E) or {λx.F} ⊆ {Int}.

Question (type inference): Is E typable? If so, what is an annotation of it?

Algorithm: Use the safety checking algorithm. If E turns out to be typable, we get an annotation by first calculating the two functions

  φ0 = λV.{ k | the constraint {k} ⊆ V is in C̄(E) }

and

  f = λS.t_{A(E,φ0,S)}

and then forming the composition

  ψ = f ∘ φ0.

This function is a solution of T(E).

The question of type inference has been open until now. In contrast, it is well known that flow analysis in the style discussed in this paper can be computed in time O(n³).

Acknowledgements

The authors thank Mitchell Wand for encouragement and helpful discussions. The results of this paper were obtained while the first author was at Northeastern University, Boston; he is currently hosted by BRICS, Basic Research in Computer Science, Centre of the Danish National Research Foundation.

References

[1] Roberto M. Amadio and Luca Cardelli. Subtyping recursive types. ACM Transactions on Programming Languages and Systems, 15(4):575–631, 1993. Also in Proc. POPL'91.
[2] Torben Amtoft. Minimal thunkification. In Proc. WSA'93, pages 218–229, 1993.
[3] Andrew W. Appel. Compiling with Continuations. Cambridge University Press, 1992.
[4] Anders Bondorf.
Automatic autoprojection of higher order recursive equations. Science of Computer Programming, 17(1–3):3–34, December 1991.
[5] Charles Consel. A tour of Schism: A partial evaluation system for higher-order applicative languages. In Proc. PEPM'93, Second ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation, pages 145–154, 1993.
[6] Dexter Kozen, Jens Palsberg, and Michael I. Schwartzbach. Efficient recursive subtyping. Mathematical Structures in Computer Science. To appear. Also in Proc. POPL'93, Twentieth Annual SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 419–428, Charleston, South Carolina, January 1993.
[7] Dexter Kozen, Jens Palsberg, and Michael I. Schwartzbach. Efficient inference of partial types. Journal of Computer and System Sciences, 49(2):306–324, 1994. Also in Proc. FOCS'92, 33rd IEEE Symposium on Foundations of Computer Science, pages 363–371, Pittsburgh, Pennsylvania, October 1992.
[8] Tsung-Min Kuo and Prateek Mishra. Strictness analysis: A new perspective based on type inference. In Proc. Conference on Functional Programming Languages and Computer Architecture, pages 260–272, 1989.
[9] Jens Palsberg. Closure analysis in constraint form. ACM Transactions on Programming Languages and Systems. To appear. Also in Proc. CAAP'94, Colloquium on Trees in Algebra and Programming, Springer-Verlag (LNCS 787), pages 276–290, Edinburgh, Scotland, April 1994.
[10] Jens Palsberg and Michael I. Schwartzbach. Safety analysis versus type inference. Information and Computation. To appear.
[11] Jens Palsberg and Michael I. Schwartzbach. Safety analysis versus type inference for partial types. Information Processing Letters, 43:175–180, 1992.
[12] Jens Palsberg and Michael I. Schwartzbach. Object-Oriented Type Systems. John Wiley & Sons, 1994.
[13] Olin Shivers. Data-flow analysis and type recovery in Scheme. In Peter Lee, editor, Topics in Advanced Language Implementation, pages 47–87. MIT Press, 1991.
[14] Mitchell Wand.
Type inference for record concatenation and multiple inheritance. Information and Computation, 93(1):1–15, 1991.

Specifying and Verifying Parametric Processes

Wiesław Pawłowski, Paweł Paczkowski†, Stefan Sokołowski

December 1994

Abstract

A framework in which processes parametrized with other processes can be specified, defined and verified is introduced. Higher-order process parameters are allowed. The formalism resembles a typed lambda calculus built on top of a process algebra, where specifications play the role of types. A proof system for deriving judgements "parametric process meets a specification" is given.

1 Introduction

A typical approach to software development is that of decomposition of large tasks into smaller subtasks. The decomposition principle, when formalized in terms of a specification system, by which, following [Sokolowski 94], we mean a triple

  (Constr, Spec, sat)

consisting of a set of constructions Constr (programs, processes, etc.), a set of specifications Spec, and a satisfaction relation sat relating constructions to specifications, amounts to the requirement that a task of providing a construction c that satisfies a given specification S (denoted c sat S) can be decomposed as follows:

1. provide sub-constructions c1, ..., cn that satisfy some sub-specifications S1, ..., Sn into which the original specification S is decomposed, and

2. provide a method of combining c1, ..., cn so as to obtain such c that c sat S.

In [Sokolowski 94], a point of view was taken that the method mentioned in point 2 should be a construction itself, i.e. belong to Constr. Adopting this point of view we can rephrase 2 as

Institute of Computer Science, Polish Academy of Sciences, ul. Abrahama 18, 81-825 Sopot, Poland, e-mail: {W.Pawlowski, stefan}@ipipan.gda.pl; supported by ICS PAN, KBN grant PB-1312/P3/92/02 and CRIT IC 1010/II.

†Department of Computing Science, Chalmers University of Technology, 412-96 Göteborg, Sweden, and Institute of Mathematics, University of Gdańsk, ul.
Wita Stwosza 57, 80-952 Gdańsk, Poland, e-mail: [email protected], [email protected]; supported by KBN grant PB-1312/P3/92/02, CRIT IC 1010/II and ESPRIT BRA CONCUR2.

2′. provide a parametric construction f such that

    f sat S1 → S2 → ... → Sn → S

The "functional" specification S1 → S2 → ... → Sn → S, which is assumed to belong to Spec, is intended to specify precisely those parametric constructions which yield objects satisfying S when applied to sub-constructions satisfying S1, ..., Sn. Thus, if the decomposition steps 1 and 2′ are realized, then f c1 ... cn defines the c we were looking for.

When the sub-constructions are again decomposed into smaller bits, the parameters become parametric themselves and we have to deal with higher order parameters. In [Sokolowski 94] a more elaborate decomposition schema than the one presented above is considered: dependent products rather than functional specifications are allowed, but here we will confine ourselves to a simple setup.

The purpose of this work is to apply the methodology sketched above in the context of concurrent process specification and verification. As a starting point we adopt the process algebra and logic for specifying processes proposed in [Olderog 91]. Olderog extends a process algebra by introducing mixed terms, where a specification can become a part of a process term. This enables transformation based development of processes [Olderog 91, Olderog 91a]. We consider a different option and introduce parametric processes that contain process variables. We allow higher order parametricity: process variables can also represent parametric processes. Technically speaking, Olderog's process algebra is extended with abstraction and application primitives. This leads to a formalism resembling a typed lambda calculus built on top of a process algebra, where specifications play the role of types.
Mixed terms can be viewed as a special case of parametric processes.

We provide a proof system for deriving judgements "a (parametric) process meets a specification". To this end, we needed to reformulate the proof system for nonparametric processes that was given in [Olderog 91]. Derivations in that proof system use mixed terms. We provide a direct proof system for nonparametric processes, which does not appeal to mixed terms and which is shown to be equivalent to that of [Olderog 91]. The direct proof system is then extended to the parametric case.

The concept of parametric processes (but only first order ones) has already appeared in [Milner 89]. However, this method of constructing CCS processes is not reflected in their verification, because the bisimulation proof technique used in that reference requires a considerable effort in decomposing large verification tasks into smaller ones (see e.g. [Larsen and Milner 86]).

The formalism we develop allows decomposition steps of the form 1 and 2′ that involve higher-order processes as parameters, and helps one to do a systematic book-keeping of process dependencies, which is useful in higher-order constructions. We view our approach as an alternative to process development methods using transformation [Olderog 91] or refinement [Holmstrom 89] techniques.

The paper is organized as follows. In Section 2 we present a process algebra of nonparametric processes and a logic for specifying them, both of which appear in [Olderog 91]. A new, direct proof system for nonparametric processes is given in Section 3. In Section 4 we define parametric processes and extend the proof system to the parametric case. Section 5 contains an example.

2 Nonparametric processes and specifications

The class of processes that we consider and the logic for specifying their properties are taken from [Olderog 91]. We briefly present the main notions, referring to [Olderog 91] for a comprehensive exposition.

2.1 Processes

Let Comm be an infinite set of communications.
Together with a special symbol τ, representing a hidden or internal activity of a process, it will constitute the set of actions.

In order to define a syntactic representation of processes, we start with a description of the set of recursive terms Rec. Let X be a (countable) set of (process) identifiers. We assume that X is partitioned into sets XA of identifiers with alphabet A, where A ⊆ Comm is finite. In the definition below P and Q range over recursive terms, X ranges over identifiers, and A over finite sets of communications. The set Rec is defined as follows:

    P ::= stopA        (deadlock)
        | a.P          (prefix)
        | P + Q        (choice)
        | P ∥ Q        (parallel composition)
        | P[b/a]       (renaming)
        | P\a          (hiding)
        | X            (identifier)
        | μX.P         (recursion)

Iterated applications of renaming and hiding will be abbreviated as follows: P[b̄/ā] will stand for P[b1/a1]...[bn/an], where b̄ = b1, ..., bn and ā = a1, ..., an; P\A will denote P\a1...\an, where A = {a1, ..., an}.

Definition 1 A recursive term P is communication guarded if there exists a set of communications A such that for every recursive subterm μX.Q of P, for every free occurrence of X in Q:

(1) X lies within a subterm of Q of the form a.R;

(2) X does not lie within a subterm of Q of the form R[b/a] or R\a, where a ∈ A and b ∉ A.

If condition (2) above is dropped, we say that P is action guarded (this notion is usually given a separate, slightly simpler definition).

To define process terms one more notion is needed: that of a (communication) alphabet.
The alphabet of a recursive term P (denoted by α(P)) is a set of communications defined inductively as follows:

    α(stopA)   = A
    α(a.P)     = {a} ∪ α(P)
    α(P + Q)   = α(P) ∪ α(Q)
    α(P ∥ Q)   = α(P) ∪ α(Q)
    α(P[b/a])  = (α(P) − {a}) ∪ {b}
    α(P\a)     = α(P) − {a}
    α(X)       = A   if X ∈ XA
    α(μX.P)    = α(X) ∪ α(P)

Definition 2 We say that a recursive term P is a process term (or simply, a process) if it satisfies the following conditions:

- P is action guarded;
- every subterm a.Q of P satisfies a ∈ α(Q);
- every subterm Q + R of P satisfies α(Q) = α(R);
- every subterm μX.Q of P satisfies α(X) = α(Q).

The set of all process terms will be denoted by Proc. A process term is closed if it does not contain free process identifiers. The set of all closed process terms will be denoted by CProc.

The intuitive meaning of process terms is as follows:

- stopA denotes a process which engages neither in any communication nor in any internal action.
- a.P denotes a process which first communicates a and then behaves like P.
- P + Q denotes a process which behaves like P or like Q, depending on whether its first action was an action of P or of Q. If the first action could be performed by P as well as by Q, the choice made is nondeterministic.
- P ∥ Q denotes a parallel composition of P and Q, which behaves as P and Q working independently, except for synchronization on actions belonging to the intersection of their communication alphabets.
- P[b/a] denotes a process which behaves like P, but with all actions a changed to b.
- P\a denotes a process which behaves like P, but with all actions a hidden, i.e. changed to τ.
- μX.P behaves like P, but with every occurrence of X inside P denoting a "recursive call" to μX.P.

The above intuition can be made precise in several different ways. One possible choice is the so-called readiness semantics. It describes process behaviour in terms of pairs (tr, F), where tr is a finite sequence of communications called a trace, and F, called a ready set, is a set of actions in which the process is ready to engage after "performing" the trace tr.
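To keep the definitions concrete, here is a small sketch (ours, not the paper's) of the recursive-term syntax as a Python datatype, together with the inductive alphabet function α just given; the constructor and field names are our own choices.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Stop:                      # stop_A
    A: frozenset

@dataclass(frozen=True)
class Prefix:                    # a.P
    a: str
    P: "Term"

@dataclass(frozen=True)
class Choice:                    # P + Q
    P: "Term"
    Q: "Term"

@dataclass(frozen=True)
class Par:                       # P || Q
    P: "Term"
    Q: "Term"

@dataclass(frozen=True)
class Rename:                    # P[b/a]
    P: "Term"
    b: str
    a: str

@dataclass(frozen=True)
class Hide:                      # P \ a
    P: "Term"
    a: str

@dataclass(frozen=True)
class Ident:                     # identifier X with alphabet A (X in X_A)
    name: str
    A: frozenset

@dataclass(frozen=True)
class Rec:                       # mu X. P
    X: Ident
    P: "Term"

Term = Union[Stop, Prefix, Choice, Par, Rename, Hide, Ident, Rec]

def alpha(t: Term) -> frozenset:
    """The communication alphabet, clause by clause as defined above."""
    if isinstance(t, Stop):
        return t.A
    if isinstance(t, Prefix):
        return frozenset({t.a}) | alpha(t.P)
    if isinstance(t, (Choice, Par)):
        return alpha(t.P) | alpha(t.Q)
    if isinstance(t, Rename):
        return (alpha(t.P) - {t.a}) | {t.b}
    if isinstance(t, Hide):
        return alpha(t.P) - {t.a}
    if isinstance(t, Ident):
        return t.A
    if isinstance(t, Rec):
        return alpha(t.X) | alpha(t.P)

# mu X. up.dn.X, with X drawn from X_{up,dn}
X = Ident("X", frozenset({"up", "dn"}))
cnt1 = Rec(X, Prefix("up", Prefix("dn", X)))
print(sorted(alpha(cnt1)))   # ['dn', 'up']
```

Note how renaming and hiding are the only clauses that shrink the alphabet, which is why Definition 1 treats them specially for guardedness.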
Besides the ready pairs, the semantic domain contains a distinguished element τ representing initial instability of a process, and "divergent" ready pairs of the form (tr, ↑), which represent behaviours where the process, after performing the trace tr, enters an infinite loop.

Thus, the set of process information over A is defined as

    InfoR,A = {τ} ∪ (A* × P(A)) ∪ (A* × {↑})

and the semantic domain of the readiness semantics is defined as follows:

    DOMR = ⋃A DOMR,A

where A ranges over finite subsets of Comm and

    DOMR,A = {(A, R) | R ⊆ InfoR,A}.

The readiness semantics for closed process terms is a function

    R[[·]] : CProc → DOMR

which can be given an elegant denotational definition.

There is a natural partial ordering on the semantic domain DOMR:

    (A1, R1) ⊑ (A2, R2)   if   A1 = A2 and R1 ⊇ R2

Intuitively, if R[[P]] ⊑ R[[Q]] then the process denoted by Q is more controllable than the one denoted by P, i.e. it is more deterministic, less divergent and more stable.

2.2 Readiness specifications

Processes are specified by formulas of a many-sorted predicate logic, called readiness logic, which has sorts of communications, traces (finite sequences of communications), ready sets (finite subsets of communications), natural numbers and logical values. In this report we give just the minimal necessary information on readiness logic. For our purposes it is sufficient to remark that readiness logic allows one to define sets of ready pairs in a direct manner and is fairly expressive; for example, regular sets over Comm can be expressed in it.

A readiness specification is a formula of readiness logic in which at most the distinguished trace variable h and the distinguished set variable F are free, and which satisfies some natural syntactic restrictions that guarantee a monotonicity property explained below.
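On finite denotations the ordering is easy to model. The sketch below is ours and assumes the relation lost in the source is reverse inclusion of the information sets (fewer behaviours means more deterministic, matching the intuition stated above):

```python
def below(d1, d2):
    """(A1, R1) below (A2, R2) iff A1 = A2 and R1 is a superset of R2."""
    (a1, r1), (a2, r2) = d1, d2
    return a1 == a2 and r1 >= r2

A = frozenset({"up", "dn"})
# 'loose' has an extra ready pair after the trace <up>; 'tight' is more deterministic
loose = (A, {(("up",), frozenset({"dn"})), (("up",), frozenset({"up", "dn"}))})
tight = (A, {(("up",), frozenset({"dn"}))})
print(below(loose, tight))   # True: tight sits above loose, i.e. is more controllable
print(below(tight, loose))   # False
```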
Since a specification has just two free variables h and F, the notion of truth of a specification with respect to the evaluation (tr, F) of h and F can be defined; it will be denoted

    (tr, F) ⊨ S    (1)

If (1) holds, we say that S is satisfied by the ready pair (tr, F). Thus, a specification S determines the set of ready pairs which satisfy it.

Moreover, with every specification S its alphabet, denoted by α(S), can be associated. This is a consequence of yet another syntactic restriction on readiness formulas: every occurrence of the free variable h lies in the scope of a projection ↓A onto some finite set of communications A. What this means semantically is that S defines a set of ready pairs (tr, F) in which only the projection of tr onto α(S) matters, and the actions outside α(S) do not affect satisfiability of S by (tr, F).

The monotonicity property of readiness specifications that was mentioned above is defined as follows: for an arbitrary readiness specification S and for all traces tr and ready sets F and G,

    (tr, F) ⊨ S and F ⊆ G imply (tr, G) ⊨ S

The set of specifications will be denoted by Spec. The readiness semantics of readiness specifications is defined as follows:

    R[[S]] = (α(S), {(tr, F) | tr ∈ α(S)* ∧ F ⊆ α(S) ∧ (tr, F) ⊨ S}).

2.3 Satisfaction relation

Since processes as well as specifications are interpreted semantically over the readiness domain, the satisfaction relation has a simple semantic definition:

    P sat S   iff   R[[P]] ⊒ R[[S]]

Intuitively, it means that:

- The process P and the specification S have the same alphabets, i.e. α(P) = α(S).
- Safety condition: if P may engage in a trace tr, then there must exist a ready set F such that (tr, F) ⊨ S.
- Liveness condition: after performing a trace tr, the process P must be ready to engage in one of the ready sets F such that (tr, F) ⊨ S.
- Stability condition: the process P is stable immediately.

2.4 Proof system for sat via mixed terms

The inference system for sat defined in [Olderog 91] makes use of a notion of mixed terms.
Mixed terms are, as the name suggests, a mixture of readiness specifications and process terms. They are formally defined by allowing specifications in recursive terms,

    P ::= S | stopA | a.P | ...

and generalizing the definitions of alphabets and processes in a straightforward way.

Since processes and specifications are interpreted in the same semantic domain, there is no difficulty in extending the readiness semantics to mixed terms. The satisfaction relation on mixed terms, denoted by >, is defined as

    M1 > M2   iff   R[[M1]] ⊒ R[[M2]],

where M1, M2 are mixed terms. The relation sat can be viewed as the special case of > restricted to pairs P > S where P is a process and S is a specification.

In [Olderog 91] a proof system for > is given, which obviously allows us to derive assertions of the form P sat S, but mixed terms will be used in the derivations. The inference rules of the proof system for > are presented in Table 1. In these rules P, Q, R range over closed mixed terms, and S and T over specifications. Curly braces are used to denote syntactic substitutions.

3 Proof system for sat

The first step towards introducing parametricity into the formalism proposed by Olderog is to provide a proof system for sat which does not appeal to mixed terms. In Table 2 we propose such a proof system, whose rules bear a clear resemblance to Olderog's proof system for >. (From now on, the symbol > will refer to the syntactic notion of derivability in the proof system of Table 1.)

The judgements of the proof system for sat have the form

    Γ ⊢ P sat S,    (2)

where P ∈ Proc, S ∈ Spec and Γ is a set of assumptions,

    Γ = {X1 sat S1, ..., Xn sat Sn}.

The sets of assumptions will be needed to handle recursion.
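Semantically, the conclusion of such a judgement is just the domain ordering of Section 2.3: equal alphabets, and every ready pair exhibited by the process allowed by the specification. A sketch over finite denotations (the encoding and the reading of the inclusion direction are ours):

```python
def sat(proc_den, spec_den):
    """R[[P]] above R[[S]]: same alphabet, process info contained in spec info."""
    (pa, pr), (sa, sr) = proc_den, spec_den
    return pa == sa and sr >= pr

A = frozenset({"up", "dn"})
# spec allows the empty trace (ready for up) and the trace <up> (ready for dn)
spec = (A, {((), frozenset({"up"})), (("up",), frozenset({"dn"}))})
proc = (A, {((), frozenset({"up"}))})         # a process exhibiting fewer behaviours
print(sat(proc, spec))                         # True
print(sat((frozenset({"up"}), set()), spec))   # False: alphabets differ
```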
The special case of (2) with Γ = ∅ will be shown to coincide with Olderog's notion of P > S.

The theorems below show the relative soundness and completeness of the proposed proof system for sat with respect to the proof system for >.

To state the theorems we need the following auxiliary notation: for a process term P and Γ = {X1 sat S1, ..., Xn sat Sn}, by P{Γ} we shall denote the mixed term P{S1/X1, ..., Sn/Xn}, obtained from P by replacing the free occurrences of the variables X1, ..., Xn by the specifications S1, ..., Sn.

To distinguish the rules of the proof systems for > and for sat, we add the index > to the names of the respective rules of the proof system for >.

Theorem 3 (Relative soundness of the proof system for sat) If Γ ⊢ P sat S then P{Γ} > S.

Proof. Follows by induction on the length of the derivation of Γ ⊢ P sat S. Note that some of the assumptions of the proof rules in Table 1 are replaced by different but equivalent versions in Table 2. Such equivalent pairs of assumptions are collected in Lemma 4 below.
□

Table 1 (proof system for >):

(Reflexivity)
    P > P

(Transitivity)
    P > Q    Q > R
    ---------------
    P > R

(Context)
    Q > R
    ---------------------
    M{Q/X} > M{R/X}
  where M is a mixed term and M{Q/X}, M{R/X} are closed mixed terms

(Consequence)
    ⊨ S ⟹ T    P > S
    --------------------
    P > T
  where α(S) = α(T)

(Deadlock)
    stopA > h↓A = ε

(Prefix)
    (ε, {a}) ⊨ S
    --------------------
    a.S{a·h/h} > S
  where a ∈ α(S)

(Choice)
    S + T > φ+(S, T)
  where φ+(S, T) ≡ (h↓A = ε ⟹ S ∧ T) ∧ (h↓A ≠ ε ⟹ S ∨ T) and A = α(S) = α(T)

(Parallel)
    S ∥ T > φ∥(S, T)
  where φ∥(S, T) ≡ ∃G ∃H (S{G/F} ∧ T{H/F} ∧ G ∩ H ⊆ F ∧ (G ∪ H) ∩ A ⊇ F) and A = α(S) = α(T)

(Renaming)
    S[b̄/ā] > φ[](S, b̄, ā)
  where φ[](S, b̄, ā) ≡ ∃t ∃G (S{t/h, G/F} ∧ t[b̄/ā] = h↓A ∧ G[b̄/ā] ⊆ F) and A = α(S){b̄/ā}

(Hiding)
    ⊨ S{F ∪ B/F} ⟹ T
    ∀b ∈ B:  b ⊨ ¬∃F S
    ∀tr ∈ α(S)*: ∃n ≥ 0 ∀b1, ..., bn ∈ B:  tr·b1...bn ⊨ ¬∃F S
    ----------------------------------------------------------
    S\B > T
  where α(T) = α(S) − B and α(S) = A

(Recursion)
    P{S/X} > S
    -------------
    μX.P > S
  where μX.P is a communication guarded closed mixed term and α(S) = α(X)

Table 2 (proof system for sat):

(Consequence)
    ⊨ S ⟹ T    Γ ⊢ P sat S
    --------------------------
    Γ ⊢ P sat T
  where α(S) = α(T)

(Deadlock)
    Γ ⊢ stopA sat h↓A = ε

(Prefix)
    ⊨ S{ε/h, {a}/F}    Γ ⊢ P sat S{a·h/h}
    ----------------------------------------
    Γ ⊢ a.P sat S
  where a ∈ α(S)

(Choice)
    Γ ⊢ P sat S    Γ ⊢ Q sat T
    -----------------------------
    Γ ⊢ P + Q sat φ+(S, T)
  where α(S) = α(T)

(Parallel)
    Γ ⊢ P sat S    Γ ⊢ Q sat T
    -----------------------------
    Γ ⊢ P ∥ Q sat φ∥(S, T)

(Renaming)
    Γ ⊢ P sat S
    --------------------------------
    Γ ⊢ P[b̄/ā] sat φ[](S, b̄, ā)

(Hiding)
    ⊨ S{F ∪ B/F} ⟹ T
    ⊨ S ⟹ ∀b ∈ B h↓A ≠ b
    ⊨ S ⟹ ∃n ∀b1 ... bn ∈ B ¬∃F S{h·b1...bn/h}
    Γ ⊢ P sat S
    -----------------------------------------------
    Γ ⊢ P\B sat T
  where α(T) = α(S) − B and α(S) = A

(Recursion)
    Γ, X sat S ⊢ P sat S
    -----------------------
    Γ ⊢ μX.P sat S
  where μX.P is communication guarded and α(S) = α(X)

(Assumption)
    Γ, X sat S ⊢ X sat S
  where α(S) = α(X)

Lemma 4

(a) ⊨ S{ε/h, {a}/F}  if and only if  (ε, {a}) ⊨ S

(b) ⊨ S ⟹ ∀b ∈ B h↓A ≠ b  if and only if  ∀b ∈ B: b ⊨ ¬∃F S

(c) ⊨ ∃n ∀b1 ... bn ∈ B ¬∃F S{h·b1...bn/h}  if and only if  ∀tr ∈ α(S)*: ∃n ≥ 0 ∀b1, ..., bn ∈ B: tr·b1...bn ⊨ ¬∃F S

Corollary 5 The proof system for sat is sound, that is, if ⊢ P sat S then R[[S]] ⊑ R[[P]].

Proof. Follows from Theorem 3 and the soundness of the proof system for >. □

The converse of Theorem 3 also holds.

Theorem 6 (Relative completeness) Let P be a process and Γ a set of assumptions such that P{Γ} is a closed term.
If P{Γ} > S for a specification S, then Γ ⊢ P sat S.

Proof. The proof is long and is omitted due to space limitations. □

We end this section with the following useful observation.

Proposition 7 If for a recursive term P a judgement Γ ⊢ P sat S can be derived for some S and Γ, then P ∈ Proc, that is, P satisfies the restrictions of Definition 2.

Proof. Follows by induction on the derivation of Γ ⊢ P sat S. □

4 Parametric Processes and Specifications

The triple

    (CProc, Spec, sat)

can be viewed as a specification system in the sense of [Sokolowski 94]. We lift it to the parametric case by extending the process algebra with abstraction and application primitives, in a manner resembling the typed lambda calculus, where types are replaced with specifications.

First we introduce a syntactic class PSpec of parametric specifications, ranged over by σ and defined by the following abstract syntax:

    σ ::= S | σ → σ

Intuitively, a specification σ1 → σ2 describes a set of parametric processes, each of which, when given a process satisfying the (possibly parametric) specification σ1 as an argument, yields a process that satisfies the specification σ2.

The syntax of recursive terms is extended by adding two new clauses:

    P ::= ... | λX:σ.P | PP

We consider the recursive terms up to α-conversion, where λ as well as μ are variable binding operators.

The set of parametric processes will be defined as a subset of the so extended recursive terms. In the case of nonparametric processes, the syntactic restrictions enumerated in Definition 2 were used to distinguish the set Proc of processes within the set of (nonparametric) recursive terms. However, as noted in Proposition 7, these restrictions are also encoded in the proof system for sat. In the parametric case, it is more convenient to adopt a counterpart of Proposition 7 as a definition, rather than separately formulate syntactic restrictions that define the desired set of parametric processes.
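The grammar of PSpec is small enough to model directly. The following sketch (names ours) builds parametric specifications and prints them with → associating to the right, as is usual for function types:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:                # a nonparametric readiness specification S
    name: str

@dataclass(frozen=True)
class Arrow:               # sigma1 -> sigma2
    dom: object
    cod: object

def show(s):
    if isinstance(s, Base):
        return s.name
    d = show(s.dom)
    if isinstance(s.dom, Arrow):      # parenthesize a higher-order argument
        d = f"({d})"
    return f"{d} -> {show(s.cod)}"

print(show(Arrow(Base("CNT1"), Base("CNT2"))))                # CNT1 -> CNT2
print(show(Arrow(Arrow(Base("S1"), Base("S2")), Base("S"))))  # (S1 -> S2) -> S
```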
This is the approach we are going to take, but first we need to refine some notions.

The notions of the alphabet of a process and the alphabet of a specification have to be extended to the parametric case. So far the alphabets were just finite subsets of Comm, the set of communications. Alphabets of parametric processes (and specifications), which we shall call hyper-alphabets in the sequel, have to be more complex objects, indicating the alphabets of the possible arguments of a process. We define the set H of all hyper-alphabets as follows:

    H0   = the set of finite subsets of Comm
    Hn+1 = {(A, B) | A, B ∈ Hn} ∪ Hn
    H    = ⋃n≥0 Hn

Now, the hyper-alphabet Θ(σ) of a specification σ is defined recursively by the following clauses:

    Θ(σ) = α(S)               if σ = S
    Θ(σ) = (Θ(σ1), Θ(σ2))     if σ = σ1 → σ2

The hyper-alphabet Θ(P) of a recursive term P is defined by the clauses given below. If it is not possible to associate an alphabet with a term using these defining clauses, Θ(P) is considered to be undefined. Just as in the nonparametric case, we assume that every process variable X has an associated hyper-alphabet Θ(X).

    Θ(X)        = the hyper-alphabet associated with X
    Θ(stopA)    = A
    Θ(a.P)      = Θ(P) ∪ {a}           if Θ(P) ∈ H0
    Θ(P + Q)    = Θ(P) ∪ Θ(Q)          if Θ(P), Θ(Q) ∈ H0
    Θ(P ∥ Q)    = Θ(P) ∪ Θ(Q)          if Θ(P), Θ(Q) ∈ H0
    Θ(P[b/a])   = (Θ(P) − {a}) ∪ {b}   if Θ(P) ∈ H0
    Θ(P\a)      = Θ(P) − {a}           if Θ(P) ∈ H0
    Θ(μX.P)     = Θ(P) ∪ Θ(X)          if Θ(P) ∈ H0
    Θ(λX:σ.P)   = (Θ(X), Θ(P))         if Θ(X) = Θ(σ)
    Θ(PQ)       = B                    if Θ(P) = (A, B) and Θ(Q) = A

Note that Θ(P) = α(P) for P ∈ Proc, and that the alphabets of terms such as (λX:σ.P) + Q, where a process algebra operator + is applied to a higher order term, are not defined.

In the sequel we will consider only those recursive terms whose hyper-alphabets are defined, and the notion of recursive terms will refer to such terms from now on. The set of the so understood recursive terms can be stratified into two subsets: the subset of terms whose alphabet is in H0, to which the process constructors +, a._, ∥, \, [] and μ can be applied, and the subset of higher order terms, on which only abstractions and applications can be performed.
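A hyper-alphabet in H0 can be modelled as a frozenset and a higher one as an (argument, result) pair. The two interesting clauses, abstraction and application, then look like this (a sketch with our own encoding, returning None where the paper leaves Θ undefined):

```python
def theta_abs(theta_X, theta_sigma, theta_P):
    """Theta(lambda X:sigma. P) = (Theta(X), Theta(P)),
    defined only when Theta(X) = Theta(sigma)."""
    if theta_X != theta_sigma:
        return None                      # undefined
    return (theta_X, theta_P)

def theta_app(theta_P, theta_Q):
    """Theta(P Q) = B when Theta(P) = (A, B) and Theta(Q) = A."""
    if isinstance(theta_P, tuple) and theta_P[0] == theta_Q:
        return theta_P[1]
    return None                          # undefined

B = frozenset({"up", "dn"})
mk = theta_abs(B, B, B)                  # a first-order constructor, CNT1 -> CNT2 style
print(theta_app(mk, B) == B)             # True
print(theta_app(mk, frozenset({"lk"})))  # None: argument alphabet does not fit
```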
Note, however, that an application PQ, where P is a higher order term, can belong to the former subset, and process algebra operators can then be applied to it.

The syntactic notion of communication guardedness also needs to be redefined. Apart from conditions (1) and (2), we add the following point to Definition 1:

(3) X does not lie within a subterm of Q of the form RQ.

Now we can formulate the proof system for the satisfaction relation sat extended to the parametric case. The judgements will have the form

    Γ ⊢ P sat σ

where P is a (parametric) recursive term and the set of assumptions Γ is extended in the obvious way to contain assumptions of the form X sat σ.

The proof system contains all the rules collected in Table 2, which were used for nonparametric processes, but this time we assume that S and T range over nonparametric specifications, P and Q range over parametric recursive terms, Γ is extended as explained above, and the side conditions concerning alphabets refer to hyper-alphabets, i.e. α is replaced with Θ. The side condition of communication guardedness in the Recursion rule now refers to the modified version of this notion.

Moreover, the proof system of Table 2 is extended with the following rules, which are essentially the →-introduction and →-elimination rules of the proof system for the Church-style typed λ-calculus (see for example [Barendregt 92]), plus a modified Assumption rule, which replaces the Assumption rule from Table 2.

(Assumption)
    Γ, X sat σ ⊢ X sat σ
  where Θ(X) = Θ(σ)

(Abstraction)
    Γ, X sat σ1 ⊢ P sat σ2
    --------------------------
    Γ ⊢ λX:σ1.P sat σ1 → σ2
  where Θ(X) = Θ(σ1)

(Application)
    Γ ⊢ P sat σ1 → σ2    Γ ⊢ Q sat σ1
    ------------------------------------
    Γ ⊢ PQ sat σ2

Definition 8 The set of parametric processes PProc consists of those recursive terms P for which Γ ⊢ P sat σ can be derived for some Γ and σ. CPProc, the set of closed parametric processes, is defined as the set of those parametric processes that have no free variables.
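These three rules are exactly the rules of a simply typed λ-calculus with specifications as types, so a checker for the λ-fragment is only a few lines. This is a sketch: the readiness side conditions on Θ are omitted and all names are ours.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:                 # lambda X : sigma . body
    var: str
    sigma: object          # a str for a base spec S, a (dom, cod) pair for arrows
    body: object

@dataclass(frozen=True)
class App:                 # P Q
    fun: object
    arg: object

def spec_of(term, gamma):
    """gamma is the assumption set, mapping variable names to specifications."""
    if isinstance(term, Var):                          # (Assumption)
        return gamma[term.name]
    if isinstance(term, Lam):                          # (Abstraction)
        cod = spec_of(term.body, {**gamma, term.var: term.sigma})
        return (term.sigma, cod)
    if isinstance(term, App):                          # (Application)
        dom, cod = spec_of(term.fun, gamma)
        if spec_of(term.arg, gamma) != dom:
            raise TypeError("argument does not meet the parameter specification")
        return cod

# lambda X : CNT1 . X  sat  CNT1 -> CNT1
print(spec_of(Lam("X", "CNT1", Var("X")), {}))   # ('CNT1', 'CNT1')
# the same abstraction applied under the assumption Y sat CNT1
print(spec_of(App(Lam("X", "CNT1", Var("X")), Var("Y")), {"Y": "CNT1"}))  # CNT1
```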
Note that the above definitions and Proposition 7 imply that Proc ⊆ PProc and CProc ⊆ CPProc.

We have just defined all components of a parametric specification system

    (CPProc, PSpec, sat)

In this report we do not provide a semantical account of parametric processes. This can be done by extending the readiness denotational semantics so that parametric processes are interpreted as process-valued functions on processes. Such a construction, which is an adaptation of a general procedure described in [Sokolowski 94], will be reported separately, together with a soundness proof of the proof system for sat. Here we just show that the proof system respects β-reductions.

Since parametric processes can be seen as lambda terms built on top of nonparametric processes, the notion of β-reduction can be defined in the standard manner:

    (λX:σ.P)Q →β P{Q/X}

We will use the same arrow →β to denote β-reducibility in possibly many steps. The next proposition ensures that sat is preserved by β-reductions.

Proposition 9 If Γ ⊢ P sat σ and P →β P′ then Γ ⊢ P′ sat σ.

Proof. By induction on the derivation of Γ ⊢ P sat σ. □

Corollary 10 If P ∈ PProc and P →β P′ then P′ ∈ PProc.

In [Sokolowski 94] a special case of shallow parametricity is distinguished. A shallow n-parameter process has the form

    λX1:S1. ... λXn:Sn. P    (3)

This class of parametric processes corresponds to the mixed terms of [Olderog 91]. The process term shown above can be syntactically translated into a mixed term

    P{S1/X1, ..., Sn/Xn}

Conversely, a mixed term P can be translated into a process having the form (3) by replacing every occurrence of any specification Si appearing in P with a fresh variable Xi.

The following proposition shows that there is an exact match between the proof system for > on mixed terms and the proof system for sat on the restricted class of shallowly parametric processes.

Proposition 11 ⊢ λX1:S1. ... λXn:Sn. P sat S1 → ... → Sn → S if and only if P{S1/X1, ..., Sn/Xn} > S.

Proof. It can be shown by induction that

    ⊢ λX1:S1. ... λXn:Sn. P sat S1 → ... →
Sn → S

if and only if

    X1 sat S1, ..., Xn sat Sn ⊢ P sat S.

By Theorems 3 and 6 this is equivalent to P{S1/X1, ..., Sn/Xn} > S. □

5 Example

We present here a small example to illustrate the use of the proposed formalism: the task of constructing a counter of capacity 2. CNT2 below is a readiness specification of such a counter. Upon communication up (dn) the value stored in the counter is increased (decreased) by 1. The symbol up#h denotes the number of up communications in the trace h.

    CNT2 ≡ (0 ≤ up#h − dn#h ≤ 2)
         ∧ (up#h − dn#h < 2 ⟹ up ∈ F)
         ∧ (up#h − dn#h > 0 ⟹ dn ∈ F)

The construction of a process cnt2 such that

    ⊢ cnt2 sat CNT2

can be decomposed into constructing a counter of capacity 1, cnt1, and a parametric process mk-cnt2 that constructs a counter of capacity 2 given a counter of capacity 1:

    ⊢ mk-cnt2 sat CNT1 → CNT2
    ⊢ cnt1 sat CNT1

CNT1 is a specification of a counter of capacity 1:

    CNT1 ≡ (0 ≤ up#h − dn#h ≤ 1)
         ∧ (up#h − dn#h = 0 ⟹ up ∈ F)
         ∧ (up#h − dn#h = 1 ⟹ dn ∈ F)

Finally, the processes that realize the specifications can be defined as follows:

    mk-cnt2 ≡ λX:CNT1. (X[lk/up] ∥ X[lk/dn])\lk
    cnt1    ≡ μX. up.dn.X
    cnt2    ≡ mk-cnt2 cnt1

Using the proof system for parametric processes we can derive

    ⊢ cnt2 sat CNT2

as well as the remaining satisfaction assertions above.

Acknowledgement

We would like to thank Thorsten Altenkirch, Karlis Cernas and Uno Holmer for helpful comments on this work.

References

[Barendregt 92] H. P. Barendregt, Lambda Calculi with Types, in: Handbook of Logic in Computer Science, vol. 2 (S. Abramsky, Dov M. Gabbay, T. S. E. Maibaum, Eds.), pp. 117–309, Clarendon Press, Oxford 1992.

[Holmstrom 89] S. Holmstrom, A refinement calculus for specification in Hennessy-Milner logic with recursion, Formal Aspects of Computing, vol. 1 (3), pp. 242–272, 1989.

[Larsen and Milner 86] K. Larsen and R. Milner, A Complete Protocol Verification using Relativised Bisimulations, R 86-12, Institute of Electrical Systems, Aalborg University Centre, 1986.

[Milner 89] R.
Milner, Communication and Concurrency, Prentice Hall, 1989.

[Olderog 91] E.-R. Olderog, Nets, Terms and Formulas, Cambridge University Press, Cambridge 1991.

[Olderog 91a] E.-R. Olderog, Towards a design calculus for communicating programs, in: Proc. CONCUR '91, LNCS, Springer, 1991.

[Sokolowski 94] S. Sokolowski, The GDM approach to specifications and their realizations. Part I: Specification systems. Technical Report, Gdansk, 1994.

On the Formal Derivation of a FEAL Microprocessor

R. Ruksenas, K. Sere, Y. Zhao

Abstract

We present an outline of a method for the formal derivation of asynchronous VLSI circuits. The proposed method focuses on a transformational style of design, and it uses techniques familiar from the construction of parallel programs. The refinement calculus and action systems are used as a framework for the design process. As a case study we look at the derivation of an asynchronous encryption/decryption microprocessor.

1 Introduction

This paper describes ongoing work on exploring a methodology for the formal derivation of asynchronous delay-insensitive VLSI circuits within the refinement calculus and the action system framework. It is aimed to be used in the design of application-specific circuits. The basic idea is to apply techniques familiar from the construction of parallel programs to VLSI design. This approach was originally taken by Martin [9], who has developed a methodology for designing asynchronous VLSI circuits as concurrent programs within the CSP framework. Using his method he has specified and derived a number of nontrivial circuits. Delay-insensitive, asynchronous chips were then produced from the concurrent programs, which can be seen as equivalent to the parallel composition of the action systems that we derive here. However, the method of Martin is semiformal, and the transformations carried out during the design process are not formally proved to be correct.
We want to provide a completely formal basis for Martin's method.

An action system is a parallel or distributed program where parallel activity is described in terms of events, so-called actions. Several actions can be executed in parallel, as long as the actions do not share any variables. A recent extension of the action system framework, adding procedure declarations to action systems [5], gives us a very general mechanism for communication between action systems. The action systems formalism was proposed by Back and Kurki-Suonio [3].

The refinement calculus is a formalization of the stepwise refinement method of program construction. It was originally proposed by Back [1] and has later been studied and extended by several researchers, see [11, 12] among others.

Åbo Akademi University, Department of Computer Science, FIN-20520 Turku, Finland, emails: frruksena,[email protected]
University of Kuopio, Department of Computer Science and Applied Mathematics, FIN-70211 Kuopio, Finland, email: [email protected], fax: +358-71-162595

Originally, the refinement calculus was designed as a framework for the systematic derivation of sequential programs only. Back and Sere [4, 13] extended the refinement calculus to the design of action systems, and hence it became possible to handle parallel algorithms within the calculus. Back [2] made yet another extension to the calculus, showing how reactive programs could be derived in a stepwise manner within it, relying heavily on work done on data refinement.

Action systems and the refinement calculus approach have already proved to be suitable for VLSI circuit specification and design at the logical behaviour level [6]. Now we want to explore their applicability to design at a rather low (signal) level.

The refinement calculus has also been formalised within a mechanical theorem prover by Back and von Wright [7]. This gives us the tools to mechanise the design process and prove the correctness of the derived circuit.
A somewhat related method and formalism is developed in [14], but there the emphasis is put on the verification of, and formal models for, delay-insensitive circuits.

As a case study we look at the design of an asynchronous FEAL (Fast Encipherment Algorithm) processor [10]. We start from a sequential algorithm which describes the logical behaviour of the encryption processor. Our goal is to identify and isolate the basic functional components of the circuit into action systems of their own. The component action systems are joined together in a parallel composition, where they interact with each other using remote procedure calls. Synchronous communication between the different components of the circuit is achieved through the procedure mechanism [6].

Overview of the paper. In Section 2, we briefly describe the action systems formalism. In Section 3, we describe how action systems are composed into parallel systems. We also briefly mention the refinement calculus. In Section 4, we describe the target processor and give an initial specification for it as a sequential program. In Section 5, this specification is stepwise turned into a parallel composition of action systems, where each action system represents one basic functional component of the circuit. In Section 6, we introduce real parallelism into our system by creating a 3-level pipeline. Finally, in Section 7, we conclude with some remarks on the proposed method and the further steps in the derivation process.

2 Action systems

An action system (with procedures) is a statement of the form

    A :: var v; proc w
         |[ var x1, ..., xh := x1_0, ..., xh_0;
            proc p1 = P1; ...; pn = Pn;
            do A1 [] ... [] Am od
         ]| : z

The identifiers x1, ..., xh are the variables declared in A and initialized to x1_0, ..., xh_0; p1, ..., pn are the procedure headers, and Pi is the procedure body of pi, i = 1, ..., n. Within the loop, A1, ..., Am are the actions of A. Finally, z, v and w are pairwise distinct lists of identifiers.
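Operationally, the do ... od loop repeatedly fires some enabled action until none is enabled. A minimal sketch (our encoding, not the paper's: an action is a guard predicate plus a state update, and the variables live in a dict) makes this concrete:

```python
import random

def run(state, actions, max_steps=1000, seed=0):
    """Fire a nondeterministically chosen enabled action until none is enabled."""
    rng = random.Random(seed)
    for _ in range(max_steps):
        enabled = [update for guard, update in actions if guard(state)]
        if not enabled:
            break                      # do ... od terminates
        rng.choice(enabled)(state)
    return state

def move_token(s):
    # the body of the single action: a := a - 1; b := b + 1
    s["a"] -= 1
    s["b"] += 1

# a toy action system moving tokens from a to b while any remain
actions = [(lambda s: s["a"] > 0, move_token)]
final = run({"a": 5, "b": 0}, actions)
print(final)   # {'a': 0, 'b': 5}
```

With several actions in the list, disjoint guards give deterministic behaviour and overlapping guards give the nondeterministic choice described in the text.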
The list z is the import list, indicating which variables and procedures are referenced but not declared in A. The lists v and w are the export lists, indicating which variables and procedures declared in A are accessible from other action systems. Procedure bodies and actions can be arbitrary statements, and may contain procedure calls.

The guard of a program statement S is the condition gS, defined by

    gS = ¬wp(S, false)

where wp is the weakest precondition predicate transformer of Dijkstra [8]. Both procedure bodies and actions will in general be guarded commands, i.e., statements of the form

    C = g → S,

where g is a boolean condition and S is a program statement. In this case, the guard of C is g ∧ ¬wp(S, false).

Enabledness of an action. Let p = (b → T) be a procedure and let a → S; p be an action that calls p. Then the enabledness of this action is determined by the value of the action guard

    g(a → S; p) = a ∧ g(S; p)

If a procedure or action contains a call to a procedure that is not declared in the action system, then the behavior of the action system will depend on the way in which the procedures are declared in some other action system, which constitutes the environment of the action system, as will be described below.

3 Composing and refining action systems

Consider two action systems,

    A :: var v; proc r
         |[ var x := x0; proc p1 = P1; ...; pn = Pn;
            do A1 [] ... [] Am od
         ]| : z

and

    B :: var w; proc s
         |[ var y := y0; proc q1 = Q1; ...; ql = Ql;
            do B1 [] ... [] Bk od
         ]| : u

where x ∩ y = ∅, v ∩ w = ∅, and r ∩ s = ∅.
Furthermore, the local procedures declared in the two action systems are required to be distinct. We define the parallel composition A ‖ B of A and B to be the action system C:

  C :: var b ; proc c
    |[ var x, y := x0, y0 ;
       proc p1 = P1 ; ... ; pn = Pn ; q1 = Q1 ; ... ; ql = Ql ;
       do A1 [] ... [] Am [] B1 [] ... [] Bk od
    ]| : a

where a = (z ∪ u) − (v ∪ r ∪ w ∪ s), b = v ∪ w, and c = r ∪ s.

Thus, parallel composition combines the state spaces of the two constituent action systems, merging the global variables and global procedures and keeping the local variables distinct. The imported identifiers denote those global variables and/or procedures that are not declared in either A or B. The exported identifiers are the variables and/or procedures declared global in A or B. The procedure declarations and the actions in the parallel composition consist of the procedure declarations and actions of the original systems.

Enabledness. We permit procedure bodies to have guards that are not identically true. Hence, it is possible that an action which is enabled calls a procedure in another action system which then turns out not to be enabled in the state in which it is called. This situation is the same as if the calling action had not been enabled at all, and had therefore never initiated the call. In other words, the enabledness of an action is determined by the enabledness of the whole statement that is invoked when the action is executed, including the enabledness of all procedures that might be invoked.

Decomposing action systems. Given an action system

  C :: var u ; proc s
    |[ var v := v0 ;
       do C1 [] ... [] Cn od
    ]| : z

we can decompose it into smaller action systems by parallel composition.
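The bookkeeping behind A ‖ B, merging declarations and actions and recomputing the interface lists, can be sketched as follows (a toy Python model; the dictionary encoding of an action system is our own illustration, not the paper's notation):

```python
def parallel(A, B):
    """Parallel composition A || B: merge procedures and actions, keep
    local variables distinct, and import what neither component declares."""
    assert not set(A["locals"]) & set(B["locals"])   # x and y must be disjoint
    exported = A["exp_vars"] | A["exp_procs"] | B["exp_vars"] | B["exp_procs"]
    return {
        "locals":    A["locals"] + B["locals"],
        "actions":   A["actions"] + B["actions"],
        "exp_vars":  A["exp_vars"] | B["exp_vars"],      # b = v U w
        "exp_procs": A["exp_procs"] | B["exp_procs"],    # c = r U s
        # a = (z U u) - (v U r U w U s): only unresolved names stay imported
        "imports":   (A["imports"] | B["imports"]) - exported,
    }

# Hypothetical components: A exports procedure CALC and imports DOUT,
# B exports DOUT and imports the key memory K.
A = {"locals": ["x"], "actions": ["A1"], "exp_vars": {"v"},
     "exp_procs": {"CALC"}, "imports": {"DOUT"}}
B = {"locals": ["y"], "actions": ["B1"], "exp_vars": set(),
     "exp_procs": {"DOUT"}, "imports": {"K"}}
C = parallel(A, B)   # C imports only K: DOUT is now resolved internally
```

Note how an import of one component that the other exports disappears from the composed import list, exactly as in the definition of a.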
This means that we split the variables, actions and procedures of C into disjoint sets so that

  C = var u ; proc s |[ var w := w0 ; proc r = R ; A ‖ B ]| : z

where

  A :: var a2 ; proc a3
    |[ var x := x0 ; proc p = P ;
       do A1 [] ... [] Am od
    ]| : a1

  B :: var b2 ; proc b3
    |[ var y := y0 ; proc q = Q ;
       do B1 [] ... [] Bk od
    ]| : b1

The reactive components A and B interact with each other via the global variables and procedures included in the lists a2, a3, b2, b3.

Hiding and revealing. Let

  var v1, v2 ; proc v3, v4  A : z

be an action system of the form above, where z denotes the import list and v1, v2, v3, v4 denote the export lists. We can hide some of the exported global variables (v2) and procedure names (v4) by removing them from the export list:

  A' = var v1 ; proc v3  A : z.

Hiding the variables v2 and the procedure names v4 makes them inaccessible from actions outside A' in a parallel composition. Hiding thus has an effect only on the variables and procedures in the export list. The opposite operation, revealing, is also useful.

In connection with the parallel composition below we follow this convention. Let var a1 ; proc a2  A : a3 and var b1 ; proc b2  B : b3 be two action systems. Then their parallel composition is the action system

  var a1 ∪ b1 ; proc a2 ∪ b2  A ‖ B : c

where c = (a3 ∪ b3) − (a1 ∪ a2 ∪ b1 ∪ b2) according to the definition above. Hence, the parallel composition exports all the variables and procedures exported by either A or B. Sometimes there is no need to export all these identifiers, namely when they are accessed exclusively by the two component action systems A and B. This effect is achieved with the following construct, which turns out to be extremely useful later:

  var v ; proc p |[ A ‖ B ]| : c

Here the identifiers v and p satisfy v ⊆ a1 ∪ b1 and p ⊆ a2 ∪ b2.

Refining action systems. Most of the steps we will carry out within the microprocessor derivation are purely syntactic decomposition steps.
There are, however, a couple of steps where a higher-level action system is refined into another action system. These steps are formally carried out within the refinement calculus, where we consider action systems as ordinary statements, i.e., as initialized iteration statements.

The refinement calculus is based on the following definition. Let S and S' be two statements. Then S is correctly refined by S', denoted S ⊑ S', if for any postcondition Q

  wp(S, Q) ⇒ wp(S', Q).

4 FEAL algorithm and initial specification of the processor

Algorithm. As a case study we look at the design of an asynchronous encryption/decryption (FEAL) processor that implements a special case of the so-called FEAL-N algorithm [10]. The algorithm takes a plaintext (a 64-bit word) as input and produces encrypted text of the same length. While encrypting, it uses 16-bit keys that are computed from the original one given by the user. We assume here that N + 8 keys are available (when needed) in the so-called key memory.

Encryption of an input word T is given by the following set of equations:

  L0.R-1 = K(N..N+3) ⊕ T                        (1)
  R0 = R-1 ⊕ L0                                 (2)
  for i ∈ 1..N:
    Ri = L(i-1) ⊕ F(R(i-1), K(i-1))             (3)
    Li = R(i-1)                                 (4)
  L(N+1) = RN ⊕ LN                              (5)
  C = K(N+4..N+7) ⊕ RN.L(N+1)                   (6)

Here C stands for the output (the encrypted text). The concatenation of two words W1 and W2 is written W1.W2, while exclusive-or (for W1, W2 of equal length) is denoted W1 ⊕ W2. The word under encryption is represented as a concatenation L.R of a left and a right part (both 32 bits long). Furthermore, let K(m..n) be short for Km.K(m+1)...Kn, with Kj being the j-th key. We assume that the function F is available and do not consider its implementation. The decryption algorithm has the same structure; only the keys are used in a different order.

The parameter N is the number of rounds in the main encryption loop (equations 3, 4). We consider the case N = 8.
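Equations (1)-(6) transcribe directly into executable form. In the Python sketch below, the round function F and the subkey values are placeholders we invented purely to make the round structure runnable; the paper treats F and the key memory as given:

```python
MASK32, MASK64 = (1 << 32) - 1, (1 << 64) - 1

def kcat(K, m, n):
    """K(m..n): concatenation of the 16-bit keys Km, ..., Kn."""
    out = 0
    for j in range(m, n + 1):
        out = (out << 16) | (K[j] & 0xFFFF)
    return out

def F(r, k):
    # Placeholder mixer only: the paper assumes F is available and
    # leaves its implementation out of scope.
    return (r + k * 2654435761) & MASK32

def feal_rounds(T, K, N=8):
    """Round structure of equations (1)-(6); K holds the N+8 keys."""
    word = (T ^ kcat(K, N, N + 3)) & MASK64          # (1): L0.R-1 = K(N..N+3) xor T
    L, R = word >> 32, word & MASK32                 # split into L0 and R-1
    R = R ^ L                                        # (2): R0 = R-1 xor L0
    for i in range(1, N + 1):                        # (3)-(4): Feistel rounds
        L, R = R, L ^ F(R, K[i - 1])
    L = R ^ L                                        # (5): L(N+1) = RN xor LN
    return ((R << 32) | L) ^ kcat(K, N + 4, N + 7)   # (6): C = K(N+4..N+7) xor RN.L(N+1)

K = [(31 * j + 7) & 0xFFFF for j in range(16)]       # invented subkeys, N + 8 = 16
C = feal_rounds(0x0123456789ABCDEF, K)
```

Every step here (initial XOR, the round map, the final XORs) is invertible, which is why decryption has the same structure with the keys applied in the reverse order.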
Some extra processing of the word is done before the loop (equations 1, 2) and after it (equations 5, 6). So the algorithm consists of three logical parts.

Structure of the microprocessor. We make the design decision to implement each logical part of the algorithm by a separate functional component in the target microprocessor. When processing one word the components are activated one after the other, without any possibility for simultaneous activity. However, the processor is supposed to work on a sequence of words, in which case concurrent activity of all its parts is possible. The solution is to have a 3-level pipeline system. To keep the inputs of the second and third components unchanged until the processing of the data has been completed and the result passed to the next component, we introduce two registers (see Figure 1). All components of the circuit communicate through the indicated channels.

  [Figure 1: Structure of the microprocessor — the components DATAIN, REG1, CALC, REG2 and DATAOUT, together with the key memory, connected by channels.]

We start from the sequential action system that implements the above algorithm in the procedure FEAL:

  F0 :: proc FEAL
    |[ var Li, Ri, Ro, Lo, T ;
       proc FEAL(t) = ( T := t ;
         |[ var Ri, Lo, L[j], R[j] for j = 0, ..., 8 ;
            Li.Ri := K8..11 ⊕ T ;
            Ri := Li ⊕ Ri ;
            L[0], R[0] := Li, Ri ;
            [ L[j], R[j] := R[j-1], L[j-1] ⊕ F(R[j-1], K0..7[16j-15..16j]) ]
              for j = 1, ..., 8 ;
            Ro, Lo := R[8], L[8] ;
            Lo := Ro ⊕ Lo ;
            C := K12..15 ⊕ Ro.Lo
         ]| )
    ]| : C, K

In our target program each component of the processor is represented as an action system of its own. Their parallel composition models the entire system, where all elements can be active at the same time. Communication on the channels (synchronization of the action systems) is expressed as remote procedure calls.
We do not consider the derivation of any particular component of the processor here; instead we look more carefully at the task of isolating the basic functional parts and modelling the communication through the channels at the low (signal) level.

5 Decomposition into parallel action systems

As described above, the algorithm consists of three parts. We therefore start by introducing three procedures to represent them:

  F1 :: proc FEAL
    |[ var Li, Ri, Ro, Lo, T ;
       proc INXOR(var li, ri) =
         ( |[ var Ri ; li.Ri := K8..11 ⊕ T ; ri := li ⊕ Ri ]| ) ;
       proc CIPHER(li, ri, var ro, lo) =
         ( |[ var L[j], R[j] for j = 0, ..., 8 ;
              L[0], R[0] := li, ri ;
              [ L[j], R[j] := R[j-1], L[j-1] ⊕ F(R[j-1], K0..7[16j-15..16j]) ]
                for j = 1, ..., 8 ;
              ro, lo := R[8], L[8]
           ]| ) ;
       proc OUTXOR(ro, lo) =
         ( |[ var Lo ; Lo := ro ⊕ lo ; C := K12..15 ⊕ ro.Lo ]| ) ;
       proc FEAL(t) =
         ( T := t ; INXOR(Li, Ri) ; CIPHER(Li, Ri, Ro, Lo) ; OUTXOR(Ro, Lo) )
    ]| : C, K

We have that F0 ⊑ F1. The encryption itself is now done in procedure CIPHER, while INXOR performs the preparatory processing of the word and OUTXOR ends the whole process.

Parallelizing FEAL. No parallel activity is possible in the above action system. Hence, we proceed by creating a parallel composition of action systems which contains actions that can be executed simultaneously.

First we split the procedure FEAL into three parts by creating new procedures DIN, CALC and DOUT. DIN isolates procedure INXOR from the rest, while CALC and DOUT separate procedures CIPHER and OUTXOR.
More formally, this can be represented as the following sequence of refinements:

  ( T := t ; INXOR(Li, Ri) ; CIPHER(Li, Ri, Ro, Lo) ; OUTXOR(Ro, Lo) )
  ⊑
  proc DIN(li, ri) = ( INXOR(li, ri) ; CIPHER(Li, Ri, Ro, Lo) ; OUTXOR(Ro, Lo) ) ;
  ( T := t ; DIN(Li, Ri) )
  ⊑
  proc DIN(li, ri) = ( INXOR(li, ri) ; CALC(li, ri) ) ;
  proc CALC(li, ri) = ( CIPHER(li, ri, Ro, Lo) ; OUTXOR(Ro, Lo) ) ;
  ( T := t ; DIN(Li, Ri) )
  ⊑
  proc DIN(li, ri) = ( INXOR(li, ri) ; CALC(li, ri) ) ;
  proc CALC(li, ri) = ( CIPHER(li, ri, Ro, Lo) ; DOUT(Ro, Lo) ) ;
  proc DOUT(ro, lo) = ( OUTXOR(ro, lo) ) ;
  ( T := t ; DIN(Li, Ri) )

The new action system is as below:

  F2 :: proc FEAL
    |[ var Li, Ri, Ro, Lo, T ;
       proc INXOR(var li, ri) =
         ( |[ var Ri ; li.Ri := K8..11 ⊕ T ; ri := li ⊕ Ri ]| ) ;
       proc CIPHER(li, ri, var ro, lo) =
         ( |[ var L[j], R[j] for j = 0, ..., 8 ;
              L[0], R[0] := li, ri ;
              [ L[j], R[j] := R[j-1], L[j-1] ⊕ F(R[j-1], K0..7[16j-15..16j]) ]
                for j = 1, ..., 8 ;
              ro, lo := R[8], L[8]
           ]| ) ;
       proc OUTXOR(ro, lo) =
         ( |[ var Lo ; Lo := ro ⊕ lo ; C := K12..15 ⊕ ro.Lo ]| ) ;
       proc DIN(li, ri) = ( INXOR(li, ri) ; CALC(li, ri) ) ;
       proc CALC(li, ri) = ( CIPHER(li, ri, Ro, Lo) ; DOUT(Ro, Lo) ) ;
       proc DOUT(ro, lo) = ( OUTXOR(ro, lo) ) ;
       proc FEAL(t) = ( T := t ; DIN(Li, Ri) )
    ]| : C, K

We have also proved the refinement relation F1 ⊑ F2.

Next we can relax the atomicity constraints as follows. First we add and initialize two new boolean variables in and ca:

  F3 :: proc FEAL
    |[ var Li, Ri, Ro, Lo, T ; in, ca ∈ boolean ;
       proc INXOR(var li, ri) =
         ( |[ var Ri ; li.Ri := K8..11 ⊕ T ; ri := li ⊕ Ri ]| ) ;
       proc CIPHER(li, ri, var ro, lo) =
         ( |[ var L[j], R[j] for j = 0, ..., 8 ;
              L[0], R[0] := li, ri ;
              [ L[j], R[j] := R[j-1], L[j-1] ⊕ F(R[j-1], K0..7[16j-15..16j]) ]
                for j = 1, ..., 8 ;
              ro, lo := R[8], L[8]
           ]| ) ;
       proc OUTXOR(ro, lo) =
         ( |[ var Lo ; Lo := ro ⊕ lo ; C := K12..15 ⊕ ro.Lo ]| ) ;
       proc DIN(li, ri) = ( ¬in → INXOR(li, ri) ; in := true ) ;
       proc CALC(li, ri) = ( ¬ca → CIPHER(li, ri, Ro, Lo) ; ca := true ) ;
       proc DOUT(ro, lo) = ( OUTXOR(ro, lo) ) ;
       proc FEAL(t) = ( T := t ; DIN(Li, Ri) ) ;
       in, ca := false, false ;
       do in → CALC(Li, Ri) ; in := false
       [] ca →
DOUT(Ro, Lo) ; ca := false
       od
    ]| : K, C

We have that F2 ⊑ F3.

Both DIN and CALC can be viewed as consisting of two parts. The first part executes procedure INXOR or CIPHER, respectively; then control is returned to the caller. Later on, the procedures CALC and DOUT are called from separate actions that can be enabled simultaneously. The effect of these refinements is as if we had introduced an explicit return statement into our language:

  F4 :: proc FEAL
    |[ var Li, Ri, Ro, Lo, T ;
       proc INXOR(var li, ri) =
         ( |[ var Ri ; li.Ri := K8..11 ⊕ T ; ri := li ⊕ Ri ]| ) ;
       proc CIPHER(li, ri, var ro, lo) =
         ( |[ var L[j], R[j] for j = 0, ..., 8 ;
              L[0], R[0] := li, ri ;
              [ L[j], R[j] := R[j-1], L[j-1] ⊕ F(R[j-1], K0..7[16j-15..16j]) ]
                for j = 1, ..., 8 ;
              ro, lo := R[8], L[8]
           ]| ) ;
       proc OUTXOR(ro, lo) =
         ( |[ var Lo ; Lo := ro ⊕ lo ; C := K12..15 ⊕ ro.Lo ]| ) ;
       proc DIN(li, ri) = ( INXOR(li, ri) ; return ; CALC(li, ri) ) ;
       proc CALC(li, ri) = ( CIPHER(li, ri, Ro, Lo) ; return ; DOUT(Ro, Lo) ) ;
       proc DOUT(ro, lo) = ( OUTXOR(ro, lo) ) ;
       proc FEAL(t) = ( T := t ; DIN(Li, Ri) )
    ]| : C, K

Hence we have that F3 = F4: the effect of carrying out F3 is the same as that of F4.
The return statement is merely syntactic sugar.

Separation of DATAIN, CALC and DATAOUT. Let us now decompose the action system F4 into a parallel composition of three action systems Datain, Calc and Dataout:

  F4 ⊑ Datain ‖ Calc ‖ Dataout

where the new action systems are defined as below:

  Datain :: proc FEAL
    |[ var Li, Ri, T ;
       proc INXOR = ( |[ var Ri ; Li.Ri := K8..11 ⊕ T ; Ri := Li ⊕ Ri ]| ) ;
       proc DIN(li, ri) = ( Li, Ri := li, ri ; INXOR ; return ; CALC(Li, Ri) ) ;
       proc FEAL(t) = ( T := t ; DIN(Li, Ri) )
    ]| : K, CALC

  Calc :: proc CALC
    |[ var Li, Ri, Ro, Lo ;
       proc CIPHER = ( |[ var L[j], R[j] for j = 0, ..., 8 ;
                          L[0], R[0] := Li, Ri ;
                          [ L[j], R[j] := R[j-1], L[j-1] ⊕ F(R[j-1], K0..7[16j-15..16j]) ]
                            for j = 1, ..., 8 ;
                          Ro, Lo := R[8], L[8]
                       ]| ) ;
       proc CALC(li, ri) = ( Li, Ri := li, ri ; CIPHER ; return ; DOUT(Ro, Lo) )
    ]| : K, DOUT

  Dataout :: proc DOUT
    |[ var Ro, Lo ;
       proc OUTXOR = ( |[ var Lo ; Lo := Ro ⊕ Lo ; C := K12..15 ⊕ Ro.Lo ]| ) ;
       proc DOUT(ro, lo) = ( Ro, Lo := ro, lo ; OUTXOR )
    ]| : C, K

Now the systems communicate through the global procedures CALC and DOUT. We have also removed the parameters (li, ri) and (ro, lo) from the procedures INXOR, CIPHER and OUTXOR, and added some extra assignments to the procedures DIN, CALC and DOUT to play the role of the now unnecessary parameters.

6 Registers

Of the three action systems above, only Datain and Dataout can be executed in parallel. To make all components executable simultaneously, we add two registers between the action systems.
New procedures REG1 and REG2 are created to keep the intermediate results of the computation unchanged while they are used by the next component.

First we refine Datain to Datain':

  ( Li, Ri := li, ri ; INXOR ; return ; CALC(Li, Ri) )
  ⊑
  proc REG1(li, ri) = ( |[ var Li, Ri ; Li, Ri := li, ri ; return ; CALC(Li, Ri) ]| ) ;
  ( Li, Ri := li, ri ; INXOR ; return ; REG1(Li, Ri) )

  Datain' :: proc FEAL
    |[ var Li, Ri, T ;
       proc INXOR = ( |[ var Ri ; Li.Ri := K8..11 ⊕ T ; Ri := Li ⊕ Ri ]| ) ;
       proc DIN(li, ri) = ( Li, Ri := li, ri ; INXOR ; return ; REG1(Li, Ri) ) ;
       proc REG1(li, ri) = ( |[ var Li, Ri ; Li, Ri := li, ri ; return ; CALC(Li, Ri) ]| ) ;
       proc FEAL(t) = ( T := t ; DIN(Li, Ri) )
    ]| : K, CALC

Then Calc is refined by Calc':

  ( Li, Ri := li, ri ; CIPHER ; return ; DOUT(Ro, Lo) )
  ⊑
  proc REG2(ro, lo) = ( |[ var Ro, Lo ; Ro, Lo := ro, lo ; return ; DOUT(Ro, Lo) ]| ) ;
  ( Li, Ri := li, ri ; CIPHER ; return ; REG2(Ro, Lo) )

  Calc' :: proc CALC
    |[ var Li, Ri, Ro, Lo ;
       proc CIPHER = ( |[ var L[j], R[j] for j = 0, ..., 8 ;
                          L[0], R[0] := Li, Ri ;
                          [ L[j], R[j] := R[j-1], L[j-1] ⊕ F(R[j-1], K0..7[16j-15..16j]) ]
                            for j = 1, ..., 8 ;
                          Ro, Lo := R[8], L[8]
                       ]| ) ;
       proc CALC(li, ri) = ( Li, Ri := li, ri ; CIPHER ; return ; REG2(Ro, Lo) ) ;
       proc REG2(ro, lo) = ( |[ var Ro, Lo ; Ro, Lo := ro, lo ; return ; DOUT(Ro, Lo) ]| )
    ]| : K, DOUT

Finally, we split Datain' into two components, Datain'' and Reg1, as follows:

  Datain'' :: proc FEAL
    |[ var Li, Ri, T ;
       proc INXOR = ( |[ var Ri ; Li.Ri := K8..11 ⊕ T ; Ri := Li ⊕ Ri ]| ) ;
       proc DIN(li, ri) = ( Li, Ri := li, ri ; INXOR ; return ; REG1(Li, Ri) ) ;
       proc FEAL(t) = ( T := t ; DIN(Li, Ri) )
    ]| : K, REG1

  Reg1 :: proc REG1
    |[ proc REG1(li, ri) = ( |[ var Li, Ri ; Li, Ri := li, ri ; return ; CALC(Li, Ri) ]| )
    ]| : K, CALC

Similarly, Calc' is divided into Calc'' and Reg2:

  Calc'' :: proc CALC
    |[ var Li, Ri, Ro, Lo ;
       proc CIPHER = ( |[ var L[j], R[j] for j = 0, ..., 8 ;
                          L[0], R[0] := Li, Ri ;
                          [ L[j], R[j] := R[j-1], L[j-1] ⊕ F(R[j-1], K0..7[16j-15..16j]) ]
                            for j = 1, ..., 8 ;
                          Ro, Lo := R[8], L[8]
                       ]| ) ;
       proc CALC(li, ri) = ( Li, Ri
:= li, ri ; CIPHER ; return ; REG2(Ro, Lo) )
    ]| : K, REG2

  Reg2 :: proc REG2
    |[ proc REG2(ro, lo) = ( |[ var Ro, Lo ; Ro, Lo := ro, lo ; return ; DOUT(Ro, Lo) ]| )
    ]| : K, DOUT

As a result, we have derived the action system that models the processor in Figure 1:

  F5 :: proc FEAL |[ Datain'' ‖ Reg1 ‖ Calc'' ‖ Reg2 ‖ Dataout ]| : C, K

We have also established the refinement relation between the initial and final programs:

  F0 ⊑ F5

7 Concluding remarks

The final parallel composition of action systems is a refinement of the initial sequential program. So far we have isolated the basic functional components of the processor and introduced two registers, thus creating a 3-level pipeline system. All manipulations of the program were completely formal and based on the refinement calculus. To achieve this result we mostly used the method of parallel decomposition of an action system. Communication through the channels was modelled by remote procedure calls. Some steps involved a refinement of the atomicity of the system, so they were true refinements and required proofs.

The next step in the derivation would be a refinement of the communication between the action systems (the components of the processor). We intend to implement it by a so-called handshaking protocol. An isolation of the control and data parts of the circuit could then follow.

References

[1] R. J. R. Back. On the Correctness of Refinement Steps in Program Development. PhD thesis, Department of Computer Science, University of Helsinki, Helsinki, Finland, 1978. Report A-1978-4.

[2] R. J. R. Back. Refinement calculus, part II: Parallel and reactive programs. In J. W. de Bakker, W.-P. de Roever, and G. Rozenberg, editors, Stepwise Refinement of Distributed Systems: Models, Formalisms, Correctness. Proceedings, 1989, volume 430 of Lecture Notes in Computer Science. Springer-Verlag, 1990.

[3] R. J. R. Back and R. Kurki-Suonio. Decentralization of process nets with centralized control. In Proc. of the 2nd ACM SIGACT-SIGOPS Symp.
on Principles of Distributed Computing, pages 131-142, 1983.

[4] R. J. R. Back and K. Sere. Stepwise refinement of parallel algorithms. Science of Computer Programming, 13:133-180, 1989.

[5] R. J. R. Back and K. Sere. Action systems with synchronous communication. In E.-R. Olderog, editor, Programming Concepts, Methods and Calculi. Proceedings of the IFIP TC2/WG2.1/WG2.2/WG2.3 Working Conference, pages 107-126, 1994.

[6] R. J. R. Back and K. Sere. Specification of a Microprocessor. Reports on Computer Science and Mathematics 148, Åbo Akademi, 1994.

[7] R. J. R. Back and J. von Wright. Refinement concepts formalised in higher-order logic. Formal Aspects of Computing, 2:247-272, 1990.

[8] E. W. Dijkstra. A Discipline of Programming. Prentice-Hall International, 1976.

[9] A. J. Martin. Synthesis of Asynchronous VLSI Circuits. Technical report, CalTech, 1993.

[10] S. Miyaguchi and A. Shimizu. Fast Data Encipherment Algorithm. EUROCRYPT '87, pages 267-278, 1987.

[11] C. C. Morgan. The specification statement. ACM Transactions on Programming Languages and Systems, 10(3):403-419, July 1988.

[12] J. M. Morris. A theoretical basis for stepwise refinement and the programming calculus. Science of Computer Programming, 9:287-306, 1987.

[13] K. Sere. Stepwise Refinement of Parallel Algorithms. PhD thesis, Department of Computer Science, Åbo Akademi University, Turku, Finland, 1990.

[14] J. Staunstrup and M. R. Greenstreet. Synchronized Transitions.
IFIP WG 10.5 Summer School on Formal Methods for VLSI Design, lecture notes, 1990.

Strictness and Totality Analysis

Kirsten Lackner Solberg
Hanne Riis Nielson and Flemming Nielson
Computer Science Dept., Aarhus University, Denmark
e-mail: {kls, hrn, [email protected]}

The full version of the paper can be found in Proceedings of SAS '94, LNCS 864, pages 408-422, 1994.

Strictness analysis has proved useful in the implementation of lazy functional languages such as Miranda, Lazy ML and Haskell: when a function is strict, it is safe to evaluate its argument before performing the function call. Totality analysis is equally useful but has not been adopted so widely: if the argument to a function is known to terminate, then it is safe to evaluate it before performing the function call.

In this talk we present an inference system for performing strictness and totality analysis. We restrict our attention to a simply typed lambda-calculus with constants and a fixpoint operator. The inference system is an extension of the usual type system in that we introduce three annotations on types t:

  b t:  the value has type t and is definitely ⊥,
  n t:  the value has type t and is definitely not ⊥, and
  ⊤ t:  the value has type t and can be any value.

Annotated types can be constructed using the function type constructor and (top-level) conjunction. As an example, a function may have the annotated type (n Int → n Int) ∧ (b Int → b Int), which means that given a terminating argument the function will definitely terminate, and given a non-terminating argument it will definitely not terminate. Thus we capture the strictness as well as the totality of the function. Strictness and totality information can also be combined, as in (b Int → n Int → n Int) ∧ (n Int → b Int → n Int) ∧ (b Int → b Int → b Int), which is the annotated type of McCarthy's ambiguity operator.

We give examples of its use and prove its correctness with respect to a natural-style operational semantics.

Dept. of Math.
and Computer Science, Odense University, Denmark

Backward Refinement for Verifying Distributed Algorithms

K. Sere
M. Waldén

We present a new verification method for distributed algorithms. The basic idea is that the algorithm to be verified is stepwise transformed into a high-level specification through a number of correctness-preserving steps. At each step some mechanism of the algorithm is identified and abstracted away, while the basic computation of the original algorithm is preserved. In this way the algorithm becomes more coarse-grained. Only the essential parts of the algorithm are then left for the final verification. The method is formalized within the refinement calculus [1] using superposition refinement [2] in a backward direction.

The idea is as follows. We verify an algorithm through a number of backward refinement steps. Each step can be verified within the refinement calculus using the superposition refinement rule. The correctness of the final algorithm is then easily verified, thereby establishing the correctness of the original algorithm. An extensive case study is described in [4]. An additional contribution of the backward refinement method is that the algorithm will be described as consisting of some basic computation and a number of mechanisms added on top of it.

Our method is closely related to the reduction method of Lipton [3]. In contrast to Lipton, the method presented here is based on a formal calculus, the refinement calculus, for reasoning about programs. The main purpose of the refinement calculus is to provide a basis for the stepwise refinement approach to program construction. Our work shows how this calculus can be used to verify an algorithm.

References

[1] R. J. R. Back. On the Correctness of Refinement Steps in Program Development. PhD thesis, Report A-1978-4, Department of Computer Science, University of Helsinki, Finland, 1978.

[2] R. J. R. Back and K. Sere. Superposition refinement of reactive systems.
Series A-144, Reports on Computer Science and Mathematics, Åbo Akademi University, Finland, 1993.

[3] R. J. Lipton. Reduction: A method of proving properties of parallel programs. Communications of the ACM, 18(12):717-721, 1975.

[4] K. Sere and M. Waldén. Verification of a distributed algorithm due to Chu. Manuscript, Department of Computer Science, Åbo Akademi University, Turku, Finland, 1994. Abstract presented at the 13th Symposium on Principles of Distributed Computing (PODC '94), Los Angeles, USA.

University of Kuopio, Department of Computer Science and Applied Mathematics, P.O. Box 1627, SF-70211 Kuopio, Finland, e-mail: [email protected]
Åbo Akademi University, Department of Computer Science, SF-20520 Turku, Finland, e-mail: [email protected]

A NONCLAUSAL RESOLUTION SYSTEM FOR BRANCHING TEMPORAL LOGIC

Juratė Sakalauskaitė
Institute of Mathematics and Informatics
Akademijos 4, 2600 Vilnius, Lithuania
e-mail: [email protected]

Abstract. We present a proof system for branching propositional temporal logic. The system is based on nonclausal resolution. The system is proved to be complete; the proof of completeness uses a tableau construction for the logic.

1. Introduction

Temporal logic is an appropriate formalism for reasoning about concurrent systems. We consider here the branching propositional temporal logic BPTL, i.e., in the underlying model of the logic any instant of time may split into different possible futures. Branching-time logic allows one to reason about these different possible futures. BPTL is a subsystem of the branching-time logic introduced in [BPM]. The language of BPTL contains the usual propositional connectives (say ∧, ∨, ¬, ⊃) and temporal modalities. Time is assumed discrete and branching.
In BPTL, if u and v range over formulas, then

  ◯u   means "u is true in each next state";
  ∃◯u  means "u is true in some next state"; in other words, ∃◯u ≡ ¬◯¬u;
  □u   means "u is always true (from now on)";
  ◇u   means "u is eventually true"; in other words, ◇u ≡ ¬□¬u.

For BPTL a Hilbert-style proof system can be obtained as a subsystem of the Hilbert-style proof system presented in [BPM].

In this paper we present a nonclausal resolution proof system for BPTL. Nonclausal resolution has the advantage over classical clausal resolution of not requiring formulas to be in clause form. The proof of completeness uses a tableau construction for BPTL extracted from [BPM]. The idea of using a tableau construction to prove completeness of a nonclausal resolution proof system is adopted from [AM].

In section 2 we present the syntax and semantics of BPTL. In section 3 we introduce a nonclausal resolution system R and explore soundness issues. In section 4 we present the proof of completeness of R. Section 5 contains some concluding remarks.

2. Syntax and semantics of BPTL

Formulas are defined as usual with the help of the propositional connectives ∧, ∨, ¬ and the temporal modalities ◯, ∃◯, □, ◇.

A model for BPTL is a triple (S, P, R), where S is a set of states, P is an assignment of propositional letters to states, and R is a binary relation on states such that for each s ∈ S there is a t ∈ S with (s, t) ∈ R. For a propositional letter a and a state s ∈ S, a ∈ P(s) iff a is true in s. We extend the interpretation over the model to all formulas of BPTL as follows, where b = (s = s0, s1, ..., si, ...) is an infinite path through the model such that si R si+1:

  1. s ⊨ a     iff a ∈ P(s), for a atomic;
  2. s ⊨ ¬p    iff s ⊭ p;
  3. s ⊨ p ∨ q iff s ⊨ p or s ⊨ q;
  4. s ⊨ □p    iff ∀b ∀t (t ∈ b implies t ⊨ p);
  5. s ⊨ ◯p    iff ∀t (sRt implies t ⊨ p);
  6. s ⊨ ◇p    iff ∃b ∃t (t ∈ b and t ⊨ p);
  7. s ⊨ ∃◯p   iff ∃t (sRt and t ⊨ p).

p is satisfiable iff s ⊨ p for some model M and some state s in M. p is valid iff s ⊨ p for each model M and each state s in M.
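Over a finite model with a total transition relation, clauses 1-7 can be evaluated directly: the path quantifiers in clauses 4 and 6 then range over exactly the states reachable from s. A small Python sketch (the nested-tuple encoding of formulas is our own; ◯, ∃◯, □, ◇ appear as 'AX', 'EX', 'AG', 'EF'):

```python
def reachable(R, s):
    """States lying on some path from s (s itself included)."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for (a, b) in R:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def holds(phi, s, P, R):
    """Evaluate a BPTL formula (nested tuples) at state s.
    'AX'/'EX' are the universal/existential next operators of clauses
    5 and 7; 'AG'/'EF' the always/eventually operators of clauses 4
    and 6, reduced to reachability on a finite model with total R."""
    op = phi[0]
    if op == "atom":
        return phi[1] in P[s]
    if op == "not":
        return not holds(phi[1], s, P, R)
    if op == "or":
        return holds(phi[1], s, P, R) or holds(phi[2], s, P, R)
    succ = [t for (a, t) in R if a == s]
    if op == "AX":   # clause 5
        return all(holds(phi[1], t, P, R) for t in succ)
    if op == "EX":   # clause 7
        return any(holds(phi[1], t, P, R) for t in succ)
    if op == "AG":   # clause 4
        return all(holds(phi[1], t, P, R) for t in reachable(R, s))
    if op == "EF":   # clause 6
        return any(holds(phi[1], t, P, R) for t in reachable(R, s))
    raise ValueError(op)

R = {(0, 1), (1, 1)}            # total: each state has a successor
P = {0: {"p"}, 1: {"p", "q"}}
assert holds(("AG", ("atom", "p")), 0, P, R)   # p holds on every path from 0
```

Totality of R is what justifies the reduction: every reachable state extends to an infinite path, so "some path" and "some reachable state" coincide.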
3. The resolution system for BPTL

In this section we describe a nonclausal resolution proof system R for BPTL.

3.1 Preliminaries

The sequence of formulas S0, ..., Sn is a derivation of Sn from S0 if, for all i, Si+1 is obtained from Si by the rules of the system. We refer to the Si as derivation steps. In the special case where Sn = false, the sequence S0, ..., Sn is a refutation of S0. A proof of the formula w is a refutation of ¬w. We write ⊢ w to denote that the formula w has a proof within R.

The resolution system in this paper consists of simplification rules and deduction rules. We also use the recursive-call mechanism, which enables rules to be applied to subformulas of a proof step.

We define the set of conjuncts of a formula u as follows: u is a conjunct of u, and if u is of the form u1 ∧ u2, then the conjuncts of u1 and the conjuncts of u2 are conjuncts of u.

Simplification rules. These rules simplify formulas, or put them in forms where the other rules can be applied. They are all of the form

  u1, ..., um ⇒ v.

The rule u1, ..., um ⇒ v can be applied to a formula Si if the formulas u1, ..., um occur as conjuncts in Si. Then, in order to obtain the corresponding derivation step, we delete one occurrence of each ui and add the derived formula v to the conjunction.

Deduction rules. The deduction rules have the form

  u1, ..., um ↦ v.

The rule u1, ..., um ↦ v can be applied to a formula Si if the formulas u1, ..., um occur as conjuncts in Si.
Then, in order to obtain the corresponding derivation step, we add the derived formula v to the conjunction.

Polarity. An occurrence of a subformula has positive polarity in a formula if it is embedded in the scope of an even number of explicit or implicit ¬'s.

The recursive-call mechanism. The recursive-call mechanism for BPTL is: if there is a derivation of v from u, then we may take Si+1 to be Si with a positive occurrence of u replaced by v. Usually we do not mention the recursive-call mechanism in our proofs; instead we mention the simplification and deduction rules that it enables us to apply.

3.2 Soundness

For our proof notion to be meaningful we require that the rules be sound, i.e., that they maintain satisfiability: if S0 is satisfiable, then Sn is satisfiable. The following lemma is helpful in proving the soundness of the system.

Lemma 3.1. If u ⊃ v is valid and w1 is the result of replacing in w one positive occurrence of u with v, then w ⊃ w1 is valid.

Proof. By induction on the complexity of w.

Each of the simplification rules and deduction rules of R has the property that

  (u1 ∧ ... ∧ um) ⊃ v

is valid. This guarantees the soundness of R:

Theorem 3.2 (soundness). Assume that (u1 ∧ ... ∧ um) ⊃ v is valid for all simplification rules and all deduction rules. If there is a derivation of Sn from S0 and S0 is satisfiable, then Sn is satisfiable.

Proof. By induction on the depth of recursive calls in the derivation.

3.3 Rules of the system R

3.3.1 Simplification rules

True-false reduction rules. These rules include

  ◯true ⇒ true      □true ⇒ true
  ◯false ⇒ false    □false ⇒ false
  ∃◯true ⇒ true     ◇true ⇒ true
  ∃◯false ⇒ false   ◇false ⇒ false

and the regular true-false reduction rules of propositional logic, such as false ∧ u ⇒ false, etc.

Weakening rules.
We writeu < v > to indicate that the formula v occurs in u, and then u < w > denotes theresult of replacing exactly one occurrence of v with w in u. We write u < v1; : : : ; vn >to indicate that each of v1; : : : ; vn occurs in u ,and then u < w > denotes the resultof sequentially replacing exactly one occurrence of each of v1; : : : ; vn with w in u.The nonclausal resolution rule for propositional logic isA < u >;B < u >7 ! A < true > _B < false > :That is, if the formulas A < u; : : : ; u > and B < u; : : : ; u > have a commonsubformula u, then we can derive the resolvent A < true > _B < false >. This isobtained by replacing certain occurrences of u in A < u; : : : ; u > with true andcertain occurrences of u in B < u; : : : ; u > with false, and taking the disjunctionof the results. (Here "certain occurrences" means "one or more occurrences").As noted in [AM] this rule does not carry over to linear propositionaltemporal logic .It does not carry to branching propositional temporal logic aswell. The reason is that while u occurs in both A and B, it need not denote thesame truth value in all its occurrences; intuitively, each occurrence of u mayrefer to di erent instants of time.The resolution rule is sound in R under the following restriction: the oc-currences of u in A or B that are substituted by true or false, respectively, areall in the scope of the same number of 's and }'s and not in the scope of anyor in A or B; and all these occurrences of u may be except one are in thescope of only 's, each of which occurs with positive polarity.For example, consider the formulasA : } p ^ p andB :p ^} p:Taking u to be p the rule allows us to derive the resolvent351 ( } true ^ p) _ ( false ^} p):We only substituted true or false for those occurrences in the scope of }and . We cannot substitute the other occurrence of p in A, since it is in thescope of . 
Modality rules. These rules handle formulas in the scope of □, ◇, ◯ and ∃◯:

  □ rule:   □u ↦ u ∧ ◯□u;
  ◇ rule:   ◇u ↦ u ∨ ∃◯◇u;
  ◯ rule:   ◯u, ◯v ↦ ◯(u ∧ v);
  ∃◯ rule:  ∃◯u, ◯v ↦ ∃◯(u ∧ v).

Induction rule. If ⊢ ¬(w ∧ u), then

  ◇w, u ↦ ¬w ∧ ∃◯◇(w ∧ ¬u).

Distribution rule.

  u, v1 ∨ ... ∨ vn ↦ (u ∧ v1) ∨ ... ∨ (u ∧ vn).

One can write similar rules for the other propositional connectives.

3.4 An example

Let us prove the validity of the formula

  (p ∧ □(p ⊃ ◯p)) ⊃ □p.

In other words, we refute

  ¬(¬(p ∧ □(p ⊃ ◯p)) ∨ □p).

The negation rules yield

  (p ∧ □(p ⊃ ◯p)) ∧ ◇¬p.

Since ⊢ ¬(p ∧ ¬p), we apply the induction rule and weakening to derive

  (p ∧ ∃◯◇¬p) ∧ □(p ⊃ ◯p).

Since ⊢ ¬(□(p ⊃ ◯p) ∧ (p ∧ ∃◯◇¬p)) (as we check later), as above we derive

  (¬(p ∧ ∃◯◇¬p) ∧ ∃◯◇((p ∧ ∃◯◇¬p) ∧ ¬□(p ⊃ ◯p))) ∧ □(p ⊃ ◯p).

By weakening this yields

  ∃◯◇(¬□(p ⊃ ◯p)) ∧ □(p ⊃ ◯p).

Since ⊢ ¬(∃◯◇(¬□(p ⊃ ◯p)) ∧ □(p ⊃ ◯p)) (as we check later), as above we derive

  ¬(∃◯◇(¬□(p ⊃ ◯p))) ∧ ∃◯◇(∃◯◇(¬□(p ⊃ ◯p)) ∧ ¬□(p ⊃ ◯p)).

By the ◇ rule and weakening this yields

  ¬(∃◯◇(¬□(p ⊃ ◯p))) ∧ ∃◯∃◯◇¬□(p ⊃ ◯p).

By the negation rules and weakening this yields

  ◯□□(p ⊃ ◯p) ∧ ∃◯∃◯◇¬□(p ⊃ ◯p).

We apply the resolution rule (taking A = ◯□□(p ⊃ ◯p), B = ∃◯∃◯◇¬□(p ⊃ ◯p) and u = □(p ⊃ ◯p)) to derive

  ◯false ∨ ∃◯∃◯◇¬true,

and then, by simplification and weakening, false.

We still have to show a) ⊢ ¬(□(p ⊃ ◯p) ∧ (p ∧ ∃◯◇¬p)) and b) ⊢ ¬(∃◯◇(¬□(p ⊃ ◯p)) ∧ □(p ⊃ ◯p)) to justify the applications of the induction rule in this proof. We show a); b) is left to the reader. We apply the □ rule and weakening to derive

  (p ⊃ ◯p) ∧ (p ∧ ∃◯◇¬p).

By the distribution rule and weakening this yields

  (¬p ∧ p ∧ ∃◯◇¬p) ∨ (◯p ∧ p ∧ ∃◯◇¬p).

We apply the resolution rule twice to derive

  ¬true ∨ (false ∧ ∃◯◇¬p) ∨ (◯false ∨ (p ∧ ∃◯◇¬true)),

and the negation rules and simplifications yield false.

4. Completeness of R

In this section we prove that the resolution system R is complete.

Theorem 4.1 (completeness). Every valid BPTL formula is provable in the resolution system R.

Proof.
Our strategy, as in [AM], is to show that any tableau proof can be transformed into a resolution proof. In other words, if the tableau decision procedure finds a BPTL formula ¬u unsatisfiable, then ¬u has a refutation. If n is a node of the tableau then Un denotes the set of formulas at n. We prove a more general fact: for every node n in the tableau for u, if n is eliminated, then ∧Un has a refutation. The proof is by complete induction on the stage at which nodes are eliminated by the tableau decision procedure. (In other words, the rank of a node is the stage at which it is deleted.)

We recall the definition of the tableau construction for u and the tableau decision procedure ([BPM]). Formulas are classified as α-formulas and β-formulas, each with two components: for instance, p ∧ q is an α-formula with α1 = p and α2 = q, and p ∨ q is a β-formula with β1 = p and β2 = q; the modal formulas are classified similarly.

Tableau construction and decision procedure. Let u be a BPTL formula. Let n0 be the root of T and let Un0 = {¬u, }true}. T, the tableau for ¬u, is constructed inductively by applying the following rules to nodes which are leaves of T.

Rα: if α ∈ Un, then create a son n1 of n and define Un1 = Un ∪ {α1, α2}.

Rβ: if β ∈ Un, then create two sons n1, n2 of n and define Uni = Un ∪ {βi}, i = 1, 2.

R}: let Vn = {}p1, ..., }pk, q1, ..., qm} be the set of next-time formulas in the parent n. Create k sons ni, i = 1, ..., k, of n and define Uni = {pi, q1, ..., qm}.

Notation. If the Rα/Rβ/R}-rule was applied at node n, then n is called an α/β/}-node. The root and the sons of }-nodes are called pre-states. A }-node is called a state.

The construction of T is made finite by introducing the following two termination rules:

1. if p ∈ Un and ¬p ∈ Un, then this node is called closed and is not expanded further;

2.
if a node m is about to be created as a son of a }-node n and there is an ancestor n0 of n such that n0 is an immediate descendant of a }-node and Un0 = Um, then do not create m but instead connect n to n0 with a "feedback" edge.

A node of T is eliminated in the following three cases:

1) it contains a proposition and its negation;

2) it is an α- or β-node and all of its descendants have been eliminated, or it is a }-node and one of its descendants has been eliminated;

3) it is a pre-state and contains an eventuality formula for v, and on no path from the node does v occur.

¬u is found unsatisfiable if and only if all nodes of the tableau have been eliminated.

Consider a node n. Assume that if a node m has already been eliminated then ∧Um has a refutation, and suppose that n is eliminated. To prove completeness we show that ∧Un is refuted in R. We analyse three cases; they correspond to the three possible reasons why n can be eliminated.

Lemma 4.1. 1) If Un contains p and ¬p, then ∧Un can be refuted in R. 2) If n is an α-node and ∧Un1 is refuted in R, or a β-node and ∧Un1 and ∧Un2 are refuted in R, or a }-node and ∧Uni is refuted in R for some i, then ∧Un can be refuted in R.

Proof. In case 1), ∧Un is a conjunction containing both p and ¬p. We derive false by resolution on p and simplifications.

We consider 2). Let Un1 be created from Un by the α-rule. Then p ∧ q ∈ Un and Un1 = Un ∪ {p, q}, or p ∈ Un and Un1 = Un ∪ {p, p}. Assume there is a refutation of ∧Un1; in both cases it can be extended to a refutation of ∧Un.

Let Un1 and Un2 be obtained from Un by the β-rule. Then p ∨ q ∈ Un or p ∈ Un. In the first case (∧Un1) ∨ (∧Un2) is obtained from ∧Un by the distribution rule. In the second case (∧Un1) ∨ (∧Un2) is obtained from ∧Un by applying the rule and then the distribution rule. Assume there are refutations of ∧Un1 and ∧Un2; in both cases they can be extended to a refutation of ∧Un.

Let Uni be created from Un by the }-rule.
Then ∧Un = }p1 ∧ ... ∧ }pk ∧ q1 ∧ ... ∧ ql ∧ v1 ∧ ... ∧ vm, where the vi are not of the form w or }w, and ∧Uni = pi ∧ q1 ∧ ... ∧ ql. Assume there is a refutation of ∧Uni in R. The following lines are a refutation of ∧Un in R:

  1. }p1 ∧ ... ∧ }pk ∧ q1 ∧ ... ∧ ql ∧ v1 ∧ ... ∧ vm    (∧Un)
  2. }pi ∧ q1 ∧ ... ∧ ql    (by weakening from 1)
  3. }pi ∧ (q1 ∧ ... ∧ ql)    (by the rule and weakening from 2)
  4. }(pi ∧ q1 ∧ ... ∧ ql)    (by the }-rule and weakening from 3)
  ...
  S. }false    (by the assumption on Uni)
  S+1. false    (by simplification from S)

Lemma 4.2. If a node n is a prestate which contains an eventuality formula for v, and on no path from the node does v occur, then ∧Un can be refuted in R.

To prove Lemma 4.2 we prove some auxiliary lemmas. Let [n] be the set of prestates accessible from n by choosing, at each }-node, the son corresponding to } v. Then v ∈ Un, and on no path from any u ∈ [n] does v occur. Let Yu, for u ∈ {n} ∪ [n], denote the part of prestate u obtained by erasing the 's in the state preceding u. We call Yu the universal next time part of u. We will want to construct the universal next time parts of prestates in [n]. More precisely, we construct formulas wi which say which universal next time parts of prestates at depth i, counting from n, will be true then. We call wi the i-th fringe. For example, if Un = {(q ∨ }q1), v}, then w1 = (q ∧ (q ∨ }q1)) ∨ (q ∨ }q1), where (q ∧ (q ∨ }q1)) and (q ∨ }q1) are the universal next time parts of the prestates at depth 1 in [n].

Lemma 4.2.1. Given a prestate n, R can derive from Yn formulas wi built, using ∨, from the universal next time parts of prestates from [n] at depth i; furthermore, each such part occurs in the scope of exactly i 's.

Proof. Put w0 = ∧Yn. We show how to obtain wi+1 from wi. Let Yt be a universal next time part of a prestate t ∈ [n] in wi at depth i. Then Ut = Yt ∪ { v }. Let [t] be the states accessible from t by α-, β-nodes which include } v. We will show that we can derive ∨u∈[t] ∧(Uu − { }v }) from Yt = ∧(Ut − { v }) in R. In fact, applying the α-, β-rules and the distribution rule we derive ∨u∈[t]+ (∧Uu) from ∧Ut, where [t]+ is the set of all states accessible from t by α-, β-nodes, i.e.
[t]+ is obtained from [t] by adding the states which include v. But these states are eliminated, and so refuted by induction. Thus we derive ∨u∈[t] (∧Uu) from ∧Ut. Hence we derive ∨u∈[t] ∧(Uu − { }v }) from ∧(Ut − { v }). By weakening we drop from Uu all the formulas other than those prefixed with . So we derive ∨u∈[t] (∧p∈Yu p). Apply the rule to pull the 's out of the conjunction; we derive ∨u∈[t] (∧p∈Yu p). Applying this procedure to all universal next time parts at depth i in wi, we derive wi+1.

Lemma 4.2.2. Given a prestate t ∈ {n} ∪ [n], ∧Yt ∧ v can be refuted in R.

Proof. Ut = Yt ∪ { v } as mentioned above. Without loss of generality we can assume that the rule was applied to v at t to construct sons t1, t2. Then Ut1 = (Ut − { v }) ∪ {v} = Yt ∪ {v}. But t1 has been eliminated; otherwise we would have a path with v from n. Thus ∧Ut1 is refuted in R by induction.

Lemma 4.2.3. For any i ≥ 0, ⊢ ¬(wi ∧ }iv).

Proof. For i = 0, distribute v by the distribution rule and apply Lemma 4.2.2. For i > 0, distribute }iv by the distribution rule. Next use the } rule to push v inwards. Then we obtain conjunctions of universal next time parts of prestates at depth i from {n} ∪ [n] with v. Applying Lemma 4.2.2 we obtain refutations of these conjunctions. Then, applying the reduction rules false ∨ A → A and }false → false, we obtain a refutation of wi ∧ }iv.

Lemma 4.2.4 is the main lemma of the completeness proof.

Lemma 4.2.4. If for all i, ⊢ ¬(wi ∧ }iv), then ⊢ ¬∧Un.

Proof. ∧Un = ∧Yn ∧ v by assumption. Assume ⊢ ¬(wi ∧ }iv) for all i.
We put w0 = ∧Yn. From w0 and v, by the induction rule and weakening, we derive

  }(v ∧ ¬w0),

since ⊢ ¬(w0 ∧ v) by assumption. By Lemma 4.2.1 we can derive w1. Furthermore, ⊢ ¬(w1 ∧ }v) by assumption. Thus ⊢ ¬(w1 ∧ }(v ∧ ¬w0)). By the induction rule this yields

  }(}(v ∧ ¬w0) ∧ ¬w1).

In general, we can get all the wi's, and we have ⊢ ¬(wi ∧ }iv) by assumption. Successive applications of the induction rule give

  }(}(... }(v ∧ ¬w0) ∧ ...) ∧ ¬wm−1) ∧ ¬wm)

for any m. We weaken this to

  }(}(... }(¬w0) ∧ ...) ∧ ¬wm−1) ∧ ¬wm).

Call this formula γm, and define also δm from γm.

The finite model property tells us that there are only finitely many universal next time parts generated from w0. Thus for some s we have all the universal next time parts generated from w0 in R. We check that ⊢ ¬(w0 ∧ δs). Take the (s+1)-th fringe ws+1 of w0. Each universal next time part Y in ws+1 is in the scope of s+1 's and is denied in δs; furthermore, ¬Y is in the scope of s+1 }'s in δs. Applying the resolution rule we obtain false derived from w0 ∧ δs, i.e. w0 ∧ δs has a refutation in R. Thus we can apply the induction rule (and weakening) once more and get

  (¬δs ∧ }δs).

Call this formula φ. φ says that at each point at depth s+1 we are in one of the universal next time prestates reachable from w0, and that at the next instant we are in none of them. Of course, this cannot be the case; in fact, we can refute ¬δs ∧ }δs: we derive the first fringe of all the universal next time parts of prestates in ¬δs and check that the universal next time parts of prestates in these fringes were already in δs. Thus by the resolution rule we derive false, and hence false.

Proof of Lemma 4.2. From Lemma 4.2.3 and Lemma 4.2.4 we obtain that ∧Un can be refuted in R.

5. Concluding remarks

We have presented a nonclausal resolution approach to theorem proving in BPTL with its temporal modalities, and we have shown that the presented system is complete.
We expect to generalize this approach to obtain a nonclausal resolution system for branching temporal logic with the modality Until.

References

[AM] Abadi, M. and Manna, Z. (1990), Nonclausal deduction in first-order temporal logic, Journal of the ACM 37, 2, 279-317.

[BPM] Ben-Ari, M., Pnueli, A. and Manna, Z. (1983), The temporal logic of branching time, Acta Informatica 20, 207-226.


Composed Reduction Systems

David Sands
DIKU, University of Copenhagen

Abstract

This paper studies composed reduction systems: a system of programs built up from the reduction relations of some reduction system, by means of parallel and sequential composition operators. The trace-based compositional semantics of composed reduction systems is considered, and a new graph representation is introduced as an alternative basis for the study of compositional semantics, refinement, true concurrency (in the case of composed rewriting systems) and program logics.

1 Introduction

Reduction systems are simply sets equipped with some collection of binary "rewrite" relations. A reduction system can be thought of as an abstract view of computation, embodying the fundamental computational concepts of iteration, termination, and nontermination. Computation is the process of repeatedly rewriting, beginning with some object of the set; termination corresponds to obtaining an object which cannot be rewritten further; nontermination is the ability to rewrite indefinitely.

Since reduction systems have little structure, there are relatively few properties one can state about these systems, although unique-termination ("Church-Rosser") properties of reduction systems have been studied by e.g. Rosen [Ros73], Hindley [Hin69] and Staples [Sta74].

In this paper we consider systems ("programs") whose basic components are the reduction relations of some reduction system. These systems, which we call composed reduction systems, are built by composing reduction relations with two natural composition operators: parallel and sequential composition.
Composed reduction systems are not necessarily reduction systems, but they possess a notion of a "reduction step", and a corresponding notion of termination.

Parallel composition allows arbitrary interleaving of reduction steps. In the simplest case, the parallel composition of two reduction relations corresponds to the union of these relations. Parallel composition terminates when, simultaneously, both sub-systems have terminated.

(This work was partially funded by ESPRIT BRA 9102, "Coordination". Address: Universitetsparken 1, 2100 København, DENMARK. e-mail: [email protected])

Sequential composition, on the other hand, takes us outside the realm of reduction systems (over the given set). The sequential composition of two reduction relations is the system which behaves like the first reduction relation until termination of the first system, after which it behaves like the second system. The sequentially composed system is said to terminate when the second sub-system has terminated. Note, then, that the sequential composition of two reduction relations (the simplest case) is not the relational composition of these relations. Composed reduction systems over a given reduction system are built from arbitrary sequential and parallel compositions of reduction relations.

In this paper we study the semantics of composed reduction systems, expressed in terms of their constituent reduction relations. We focus on a comparison relation for programs which partially orders programs on the basis of their "input-output" behaviours, and is also a precongruence with respect to program construction.

In the first part of the paper (Section 2) we consider a standard compositional semantics based on "reactive traces" (sequences of object-pairs) derived from the SOS-like rules which give the operational semantics of composed reduction systems.
We outline some of the program laws that can be obtained, and consider the relationship to an alternative form of parallel composition.

In the second part of the paper (Section 3), we define a static graph representation for programs, and argue that it forms a better basis (than the transition traces) for the study of:

- compositional semantics, since it is higher-level than the transition traces;
- refinement laws, since the graphs can also be defined compositionally;
- concurrency, since "concurrently active" reduction relations are explicit in the representation; and
- program logics, since logics for the underlying reduction systems can be used to reason about paths through the graph.

Related Work and Applications. This work grew out of the study of composition of a specific kind of reduction system, namely programs in the Gamma model [BM93], which can be thought of as conditional associative-commutative string rewriting. The composition operators for Gamma were introduced in [HMS92], and the compositional semantics and laws were studied in [San93a][San93b]. The development of Section 2 is a direct (and straightforward) adaptation of [San93a][San93b] to this more general setting. The graph representation in Section 3 is new, and is particularly relevant from the point of view of composed Gamma programs (for example, through this representation we have discovered additional laws for Gamma programs).

The techniques given here may also be interesting when applied to other concrete reduction systems. In particular we have in mind rewriting systems in which the objects rewritten have some structure (e.g. trees, graphs, strings), and the reduction relation is specified by rules for rewriting a substructure, in terms of purely local conditions. For such systems (e.g. the usual notion of term rewriting [Klo92][DJ89]) there is a natural (implicit) notion of concurrency, viz. disjoint parts of a substructure can be rewritten asynchronously, and hence concurrently.
This view of rewriting as a natural vehicle for concurrency and parallel programming is central to Meseguer's approach [Mes92][MW91]; the composition operators studied here also make sense in that setting.

Another form of reduction system where one could reasonably employ the composition operators studied here is the guarded iteration statement from [Dij76], also known as action systems [Bac89a][Bac89b]. Action systems are nondeterministic do-od programs consisting of a collection of guarded atomic actions, which are executed nondeterministically so long as some guard remains true. In their uninitialised form, guarded iteration statements can be thought of as reduction systems over program states. The method of parallel execution is to allow actions involving disjoint program variables to be executed in parallel, which is consistent with the rewriting viewpoint above. In [Bac89b] Back studies compositional notions of refinement for action systems with respect to a meta-linguistic parallel composition operator. The parallel composition studied here is strictly more general, since it permits parallel composition of sequentially composed systems.

2 Operational and Compositional Semantics

In this section we give the operational semantics of composed reduction systems built from basic reduction relations, parallel composition and sequential composition. In what follows, we assume some reduction system ⟨U, {→r}r∈R⟩, where U is a set with typical elements M, N, M1, .... We will sometimes refer to the elements of U as states. The reduction relations {→r}r∈R are just binary relations on states. We will think of the elements of the indexing set R, ranged over by r, r2, ..., as the basic units of our composed reduction systems. Somewhat improperly, for more concrete examples we will think of R as the set of representations of the corresponding reduction relations. With respect to some r, we say that M reduces to N if M →r N (i.e.
(M, N) ∈ →r); M converges immediately, written M↓r, if ¬∃N. M →r N.

For the moment we consider composed reduction systems, ranged over by P, Q, P1, Q1, etc., given by the following grammar:

  P ::= r | P ; Q | P ‖ Q

Henceforth we will use the terms "composed reduction system" and "program" synonymously. (UNITY [CM88] has a similar composition operator, called union, but UNITY is not a reduction system in the same sense, because the notion of termination for UNITY is that of stability, reaching a fixed point, rather than inactivity.)

2.1 SOS semantics

Because of the presence of sequential composition, programs cannot be viewed as reduction systems over U, since the program is not a static entity. To define the semantics of these programs we define a single-step transition relation between configurations. The configurations are program-state pairs, written ⟨P, M⟩. The final result of a computation is given by an immediate-convergence predicate, ↓, on configurations. Single-step reduction and immediate convergence are given by the SOS-style rules of Figure 1:

  M →r N implies ⟨r, M⟩ → ⟨r, N⟩
  M↓r implies ⟨r, M⟩↓
  ⟨P, M⟩ → ⟨P′, M′⟩ implies ⟨P ; Q, M⟩ → ⟨P′ ; Q, M′⟩
  ⟨P, M⟩↓ implies ⟨P ; Q, M⟩ → ⟨Q, M⟩
  ⟨P, M⟩ → ⟨P′, M′⟩ implies ⟨P ‖ Q, M⟩ → ⟨P′ ‖ Q, M′⟩
  ⟨Q, M⟩ → ⟨Q′, M′⟩ implies ⟨P ‖ Q, M⟩ → ⟨P ‖ Q′, M′⟩
  ⟨P, M⟩↓ and ⟨Q, M⟩↓ imply ⟨P ‖ Q, M⟩↓

(Figure 1: Structural operational semantics of composed reduction systems.)

It is easily verified that immediate convergence of a configuration corresponds to the absence of any transitions for that configuration. In other words, ⟨P, M⟩ → ⟨Q, N⟩ for some ⟨Q, N⟩ if and only if ¬(⟨P, M⟩↓).

Let →* denote the transitive, reflexive closure of →. By a small abuse of notation, we will write ⟨P, M⟩ →* N to mean that there exists some ⟨Q, N⟩ such that ⟨P, M⟩ →* ⟨Q, N⟩ and ⟨Q, N⟩↓.

2.2 Behavioural orderings

In this paper we will focus on the relational (input-output) behaviours of a program. A number of "refinement" orderings on programs arise from the various natural ways to compare programs on the basis of their input-output (or relational) behaviour.
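The rules of Figure 1 can be animated directly. The following Python sketch is my own illustration under stated assumptions: a program is either the name of a reduction relation, ('seq', P, Q) for P ; Q, or ('par', P, Q) for P ‖ Q, and each named relation is a dict mapping a state to the list of states it rewrites to. `outcomes` computes the terminal states reachable from a configuration, for finite state spaces only.

```python
# Executable sketch of the SOS of composed reduction systems (Figure 1).

def steps(prog, m, rel):
    """All one-step successors <P', M'> of the configuration <prog, m>."""
    out = []
    if isinstance(prog, str):                    # basic relation r
        out += [(prog, n) for n in rel.get(prog, {}).get(m, [])]
    elif prog[0] == 'seq':
        _, p, q = prog
        out += [(('seq', p2, q), m2) for p2, m2 in steps(p, m, rel)]
        if converged(p, m, rel):                 # <P,M> converged: <P;Q,M> -> <Q,M>
            out.append((q, m))
    else:                                        # 'par': interleave both sides
        _, p, q = prog
        out += [(('par', p2, q), m2) for p2, m2 in steps(p, m, rel)]
        out += [(('par', p, q2), m2) for q2, m2 in steps(q, m, rel)]
    return out

def converged(prog, m, rel):
    """Immediate convergence: no transitions are possible from <prog, m>."""
    return not steps(prog, m, rel)

def outcomes(prog, m, rel, seen=None):
    """States N reachable as terminal results of <prog, m>."""
    seen = set() if seen is None else seen
    key = (repr(prog), m)
    if key in seen:
        return set()
    seen.add(key)
    if converged(prog, m, rel):
        return {m}
    res = set()
    for p2, m2 in steps(prog, m, rel):
        res |= outcomes(p2, m2, rel, seen)
    return res
```

With rules f: a → b and g: b → a (the example used for the de-synchroniser in Section 2.3), f ‖ g started in a never converges, while (f ; empty) ‖ g, where 'empty' is a relation with no rewrites standing in for the empty reduction relation, can terminate in a.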
One possible "behaviour" which we should consider significant is the possibility of nontermination for a given input. Nontermination, or "divergence", is a predicate on program configurations:

Definition 1. P may diverge on M, written ⟨P, M⟩↑, if there exist {⟨Pi, Mi⟩}i∈ω such that ⟨P0, M0⟩ = ⟨P, M⟩ and ⟨Pi, Mi⟩ → ⟨Pi+1, Mi+1⟩.

It is convenient to abstract the possible relational behaviours of a program as a set of possible input-output pairs. This includes the possibility of nontermination, which we represent as a possible "output" using the symbol ⊥ (∉ U):

Definition 2. The behaviours of a program P are given by

  B(P) = {(M, N) | ⟨P, M⟩ →* N} ∪ {(M, ⊥) | ⟨P, M⟩↑}

Note that for every P and M, either (M, N) ∈ B(P) for some N, or (M, ⊥) ∈ B(P) (or both). There are a variety of orderings on programs obtained by comparing their behaviours: the partial correctness ordering ignores divergent behaviours; the lower and upper orderings are formed by considering the associated discrete power-domain orderings on U⊥. In this study we only consider the strong correctness ordering, which attaches the same significance to nonterminating computations as to terminating ones. The strong correctness ordering is defined to be the largest (pre)congruence which satisfies P ⊑o Q implies B(P) ⊆ B(Q). This is given directly by the following:

Definition 3. Let C range over program contexts. We define strong precongruence (⊑o) and strong congruence (≡o) respectively by:

  P ⊑o Q iff for all C, B(C[P]) ⊆ B(C[Q])
  P ≡o Q iff P ⊑o Q and Q ⊑o P

2.3 Laws

In this section we present a number of the basic laws of strong precongruence, and show the relationship to an alternative definition of parallel composition. Let ∅ denote the empty reduction relation, satisfying M↓∅ for all M.
For example, in conditional rewriting systems, this could be represented by a reduction rule with the condition false.

Proposition 4.

  1. P ; (Q ; R) ≡o (P ; Q) ; R
  2. P ‖ (Q ‖ R) ≡o (P ‖ Q) ‖ R
  3. P ‖ Q ≡o Q ‖ P
  4. Q ; (P1 ‖ P2) ⊑o (Q ; P1) ‖ P2
  5. ∅ ‖ P ≡o P ‖ ∅ ≡o P
  6. P ⊑o P ; ∅
  7. P ≡o ∅ ; P
  8. P ⊑o P ‖ P

These are just a few of the laws of strong precongruence. In fact, almost all of the partial correctness laws of composed Gamma programs [San93b] hold for these more general composed reduction systems. Note in particular that law 6 cannot be strengthened to an equality, i.e. P ; ∅ is not ≡o-equivalent to P. The intuition for this is that ∅ acts as a de-synchroniser for parallel composition: P must synchronise with its context in order to terminate, but with P ; ∅, P is allowed to terminate autonomously, leaving ∅ to synchronise with its context, which it is trivially always able to do. As an example consider the following two term rewrite rules, where a and b are constants: P1 = a → b and P2 = b → a. It is easily seen that ⟨P1 ‖ P2, a⟩ can never terminate, but ⟨(P1 ; ∅) ‖ P2, a⟩ →* a, and so (P1 ; ∅) ‖ P2 is not ⊑o-below P1 ‖ P2.

An Alternative Parallel Composition

There is a natural alternative form of parallel program composition, which does not require that the two programs terminate synchronously. Extend the syntax of the language with P ::= P1 ⫴ P2 (we write ⫴ for this alternative parallel operator), and add the operational rules:

  ⟨Q, M⟩↓ implies ⟨P ⫴ Q, M⟩ → ⟨P, M⟩
  ⟨P, M⟩↓ implies ⟨P ⫴ Q, M⟩ → ⟨Q, M⟩

The expected associativity and commutativity properties also hold for ⫴. Some relationships with ; and ‖ can be summarised in the diagram of Figure 2, where the arrows depict the ordering ⊑o:

  (P ; ∅) ‖ (Q ; ∅)  ≡o  (P ; ∅) ⫴ (Q ; ∅)
            ↑
          P ⫴ Q
        ↗       ↖
  (P ; ∅) ‖ Q     P ‖ (Q ; ∅)
     ↑    ↖      ↗    ↑
   P ; Q     P ‖ Q     Q ; P

(Figure 2: Relationships between compositions.)

To give some intuition for this diagram, consider the increasing chain of "behaviours":

  P ; Q ⊑o (P ; ∅) ‖ Q ⊑o P ⫴ Q ⊑o (P ; ∅) ‖ (Q ; ∅)

For P ⫴ Q to terminate, either P (or some derivative thereof) terminates followed by Q, or vice versa, and either P or Q is left to synchronise its termination with the context.
The system (P ; ∅) ‖ Q has fewer behaviours because, although reductions from P are potentially concurrent with reductions from Q, P must always terminate first. (P ; ∅) ‖ (Q ; ∅), on the other hand, exhibits more behaviours, since P and Q are "concurrent", can terminate autonomously, and neither of them is required to synchronise its termination with the context.

2.4 Trace Semantics

We can characterise ⊑o (in order to prove the laws of the previous section) by finding a compositional semantics which is consistent (i.e. sound) with respect to the behaviours. Clearly the behaviours of a program will not suffice as its denotation. As is well known from the study of state-based concurrency, it is insufficient to use sequences of states as a means of distinguishing programs. The solution we adopt follows a simple approach to modelling shared-state (interleaving) concurrency via sequences of state-pairs (e.g. sequences of "moves" [Abr79], or the "abstract paths" of [Par79]). (The set of all such sequences for a program can be thought of as an "unraveling" of the program's resumption semantics [Plo76][HP79]. This "unraveling" leads to a mathematically simpler domain, with no powerdomains, which is more amenable to further refinements than resumptions.) Following the terminology of [Bro93], we will use the term transition traces, or simply traces, to refer to this kind of sequence. In these models, a pair of states in the trace of a program represents an atomic computation step of the program; adjacent pairs in any given sequence model a possible interference by some other process executing in parallel with the program.

The transition traces have a straightforward operational specification:

Definition 5. The transition traces of a program, T[[P]], are the finite and infinite sequences of state-pairs given by:

  T[[P]] = {(M0, N0)(M1, N1) ... (Mk, Nk) | ⟨P, M0⟩ → ⟨P1, N0⟩ & ⟨P1, M1⟩ → ⟨P2, N1⟩ & ... & ⟨Pk, Mk⟩↓ & Mk = Nk}
       ∪ {(M0, N0)(M1, N1) ... (Mi, Ni) ... | ⟨P, M0⟩ → ⟨P1, N0⟩ & ⟨Pi, Mi⟩ →
⟨Pi+1, Ni⟩, i ≥ 1}

The intuition behind the use of transition traces is that each transition trace (M0, N0)(M1, N1) ... ∈ T[[P]] represents computation steps of program P in some context; starting with state M0, each of the pairs (Mi, Ni) represents computation steps performed by (derivatives of) P, and the adjacent states Ni−1, Mi represent possible interfering computation steps performed by the "context". If the trace is finite then the last step corresponds to the termination "step" for a derivative of P.

Stuttering

Clearly the behaviours of a program are obtainable from its transition traces, by considering the "chained" traces of the form (M0, M1)(M1, M2) .... Transition traces are adequate for giving a compositional semantics to composed reduction systems, by interpreting sequential composition as (set-wise) trace concatenation, and parallel composition as interleaving (with the proviso that interleaved finite traces must agree on their last elements). However, the transition traces distinguish between programs which compute at different "speeds". For example, considering the empty action system DO OD, the transition traces of DO OD are different from those of (DO OD) ; (DO OD). (In order to obtain full abstraction for a while-language with parallel composition, Hennessy and Plotkin added a co-routine command, which is able to distinguish such programs.) The key to obtaining a better level of abstraction is to equate processes which only vary by "uninteresting" steps. This is the "stuttering equivalence" well known from Lamport's work on temporal logics for concurrent systems [Lam89]. Closure under stuttering equivalence has been used by de Boer et al. [dBKPR91], and by Brookes [Bro93], to provide fully abstract semantics for languages with shared state and parallel composition. Following Brookes [Bro93] we define a closure operation for sets of transition traces:

Definition 6. Let ε denote the empty sequence.
Let α range over finite sequences of state pairs, and let β range over finite or infinite sequences. A set T of finite and infinite traces is closed under left-stuttering and absorption if it satisfies the following two conditions:

  left-stuttering: if αβ ∈ T and β ≠ ε, then α(M, M)β ∈ T;
  absorption: if α(M, N)(N, M′)β ∈ T, then α(M, M′)β ∈ T.

Let zT denote the left-stuttering and absorption closure (henceforth just closure) of a set T.

In [dBKPR91] a slightly different closure operation is used, in which only stuttered steps can be absorbed. With respect to the above closure conditions, the difference is that in the clause for absorption we should also require that either M = N or N = M′. This leads to a coarser abstraction for specific reduction systems; for example, the composed string-rewriting systems (1, 1 → 2, 2) ‖ (1 → 2) and (1 → 2) have different traces under the stuttering-closure operation of [dBKPR91], but are the same under z.

Clearly the behaviours are also derivable from zT[[P]], and what is more, zT[[P]] can be specified compositionally (using monotonic operators), which gives the following:

Proposition 7. zT[[P]] ⊆ zT[[Q]] implies P ⊑o Q

In the appendix we give the compositional definition of the transition traces. For a specific collection of reduction relations over some given universe, the transition traces may not be fully abstract; in other words, we cannot reverse the implication in the above proposition. For a specific example where full abstraction fails, see [San93a]. Even if we allow all reduction relations over a given universe, it is unclear whether the transition traces are fully abstract.

3 Graph Representation

In this section we outline a static graph representation for composed reduction systems, and argue that it forms a better basis for the study of compositional semantics, refinement, true concurrency and program logics. The graph representation we will develop is something like a finite, acyclic control-flow graph, where each node corresponds to a simple form of loop.
Anode carries a set of reduction relations which are (con)currently active; anedge represents an internal termination step, where the child node may inheritsome reductions from the parent but adds some new active reductions. Fromthe viewpoint of the observational semantics, we will identify the graph withits set of complete paths.The idea is best illustrated with an example. Consider a program consistingof four reduction relations: (r1 ; r2) k(r3 ; r4)5Notice that we say left-stuttering to re ect that the context is not permitted to changethe state after the termination of the program. In this way each transition trace of a programonly charts interactions with its context up to the point of the programs termination.366 Initially r1 and r3 are active and thereby able to contribute to the reductionsteps. At some time, r1 or r3 may be able to terminate. Suppose r1 terminatesrst; then r2 and r3 become active. Symmetrically, if r3 terminates rst thenr1 and r4 become active. Continuing in this way we construct the graph forthis program:fr1; r3gfr3g.&fr1g;fr1; r4gfr2; r3gfr1g&.fr3gfr2; r4gThe operational semantics of such graphs should be transparent: control beginsat the root-node of the graph, and each node is labeled with a set of concurrentlyactive reduction-relations; each arc is labeled with a set of reduction relationswhich must converge with respect to the current state for the control to beallowed move along that edge.3.1 From SOS rules to Graph RepresentationThe graph representation will be constructed from two \abstract interpreta-tions" of the one-step evaluation relation. Consider any possible one-step re-duction on con gurations: hP;Mi ! hP 0;M 0iFrom inspection of the rules it is clear that either:1. P 6= P 0 and M =M 0, or2. P = P 0 and M!rM 0 for some reduction r in P .In the terminology of [HMS92], we call a transition of the rst kind as a passivestep, and one of the second kind as an active step. 
The passive step corresponds to some internal termination step in which the left operand of a sequential composition is discarded. The convergence of a configuration can similarly be considered to be a passive step. An active step corresponds to a reduction step on the state component of a configuration. We construct the graph representation of a given program by separately abstracting:

1. the passive steps, which will give us the arcs in the graph, and
2. the active steps, which will tell us what reductions are contained in the nodes.

Abstract Passive Steps. We abstract the passive steps performable by a program via a (labeled) transition system with judgements of the form P -R→ Q, where R is a set of reductions. As an auxiliary, we define a notion of convergence for programs which is an abstraction of the convergence predicate for configurations. The abstract convergence predicate is trivial: a program can converge only if it does not contain any sequential compositions. Let [P] denote the set of reduction relations that comprise the program P. Figure 3 defines the rules, closely following the form of the rules of Figure 1:

  r↓
  P -R→ P′ implies P ; Q -R→ P′ ; Q
  P↓ implies P ; Q -[P]→ Q
  P -R→ P′ implies P ‖ Q -R→ P′ ‖ Q
  Q -R→ Q′ implies P ‖ Q -R→ P ‖ Q′
  P↓ and Q↓ imply P ‖ Q↓

(Figure 3: Abstract passive steps.)

Active Region. We abstract the active steps of a program simply by saying which reductions in the program are immediately applicable. The immediately applicable reductions are just those which are not "guarded" by a sequential composition on their left. The active region of a program P, written ⌈P⌉, is defined inductively by:

  ⌈r⌉ = {r}
  ⌈P ; Q⌉ = ⌈P⌉
  ⌈P ‖ Q⌉ = ⌈P⌉ ∪ ⌈Q⌉

The following proposition states the precise relationship between the above abstractions and the transition relation of the structural operational semantics:

Proposition 8. For all composed reduction systems P over some universe U, and for all M, N ∈ U: ⟨P, M⟩ → ⟨Q, N⟩ if and only if either

1. M = N and P -R→ Q for some R such that for all r ∈ R, M↓r, or

2.
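The two abstractions can be sketched executably. The following Python sketch is my own illustration, assuming programs are represented, for this purpose, as the name of a reduction relation, ('seq', P, Q) or ('par', P, Q).

```python
# Sketch of the abstractions of Section 3.1: the active region of P, the
# abstract convergence predicate (no ';' inside P), the set [P] of all
# reductions in P, and the abstract passive steps P -R-> P' of Figure 3.

def region(prog):
    """Active region: reductions not guarded by a ';' on their left."""
    if isinstance(prog, str):
        return frozenset([prog])
    tag, p, q = prog
    return region(p) if tag == 'seq' else region(p) | region(q)

def reductions(prog):
    """[P]: every reduction relation occurring in P."""
    if isinstance(prog, str):
        return frozenset([prog])
    return reductions(prog[1]) | reductions(prog[2])

def conv(prog):
    """Abstract convergence: holds iff P contains no sequential composition."""
    if isinstance(prog, str):
        return True
    tag, p, q = prog
    return tag == 'par' and conv(p) and conv(q)

def passive(prog):
    """All abstract passive steps (R, P') with P -R-> P'."""
    if isinstance(prog, str):
        return []                              # r alone has no passive steps
    tag, p, q = prog
    out = []
    if tag == 'seq':
        out += [(r, ('seq', p2, q)) for r, p2 in passive(p)]
        if conv(p):
            out.append((reductions(p), q))     # P converged: P;Q -[P]-> Q
    else:
        out += [(r, ('par', p2, q)) for r, p2 in passive(p)]
        out += [(r, ('par', p, q2)) for r, q2 in passive(q)]
    return out
```

For (r1 ; r2) ‖ (r3 ; r4) this yields active region {r1, r3} and exactly the two passive steps of the example graph: one labelled {r1} to r2 ‖ (r3 ; r4), and one labelled {r3} to (r1 ; r2) ‖ r4.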
P = Q, and there exists some r ∈ ⌈P⌉ such that M →r N.

The graph form will be constructed by combining the passive steps with the active regions. We note the following facts about the passive steps:

- Passive steps are normalising, i.e. there are no infinite chains of the form P -R1→ P1 -R2→ P2 -R3→ ..., since the size of the programs strictly decreases with each passive step.
- For any P, the number of R and Q such that P -R→ Q is finite.
- In fact, the passive-step relation satisfies a "strong diamond property", namely that if P -R1→ P1 and P -R2→ P2 with P1 ≠ P2, then there is a Q such that P1 -R2→ Q and P2 -R1→ Q.

The graph form of a program P is a rooted, directed, finite acyclic graph formed by (i) forming the passive graph according to the passive-step relation, and then (ii) mapping the function ⌈·⌉ over the nodes of the passive graph to extract their active reductions. So, for example, taking the program (r1 ; r2) ‖ (r3 ; r4) we (i) construct the passive graph:

        (r1 ; r2) ‖ (r3 ; r4)
       {r1}↙            ↘{r3}
  r2 ‖ (r3 ; r4)    (r1 ; r2) ‖ r4
       {r3}↘            ↙{r1}
            r2 ‖ r4

and (ii) abstract the active region from each node to obtain:

          {r1, r3}
      {r1}↙      ↘{r3}
   {r2, r3}      {r1, r4}
      {r3}↘      ↙{r1}
          {r2, r4}

3.2 Reasoning from Graphs

It should be clear from Proposition 8 that the transition traces of a program can be constructed from its graph. In fact, from the point of view of giving the operational semantics of a program P, we can use the tree corresponding to the graph. We will show how the graph representation can be used to reason about strong equivalence and strong approximation between programs. For the purposes of the behaviours (or transition traces) of programs, we only need the set of complete paths through the graph.

Let paths(P) denote the complete (and necessarily finite) paths in the graph of P. So the graph of the program r1 ; (r2 ‖ r3) is just {r1} -{r1}→ {r2, r3}, and the program has just a single path, ⟨{r1} {r1} {r2, r3}⟩. The domain of paths (ranged over by p1, p2, etc.) is the finite, nonempty, odd-length sequences of sets of reduction relations.
Writing concatenation of sequences by juxtaposition, if R, R1, R2, etc. range over sets of reduction relations, then a path is either a sequence of length one, <R>, or a sequence of the form <R1, R2>p for some path p. Alternatively we will denote a path by <n1 a1 n2 a2 ... a_{k-1} n_k>, where the n_i (nodes) and a_i (arcs) are again sets of reduction relations.

The paths of a composed reduction system can be defined directly by induction on the passive steps:

Definition 9. The paths of a program P, paths(P), is the least set of nonempty sequences of sets of reduction relations such that:

- if P# then <[P]> is in paths(P);
- if P --R--> P' and p' is in paths(P'), then <|P|, R>p' is in paths(P).

Now, in turn, the transition traces of a program can be defined in terms of its paths. This is given in the appendix.

The first implication of this is that if two programs have equivalent paths, then they must be strongly equivalent. We conjecture a tighter relationship, namely:

Conjecture 10. paths(P1) = paths(P2) if and only if P1 ==o P2 is provable from the equational theory generated by the laws: (i) r ==o r || r; (ii) || is associative and commutative; (iii) ; is associative.

In fact, other than a few laws for the desynchroniser, we have not found any other strong equivalences (than those derivable from the above). The inequational theory for <=o is, however, much richer. But proving inequalities from the transition traces (so far the only method we have) is rather tedious. Now we consider how to reason about <=o by building comparison relations on path-sets.

3.3 Path Comparisons

Each "node" represents the reductions possible at that node. The reductions on each "edge" represent the termination condition: a set of reductions which must be inapplicable for control to transfer along that edge.
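Definition 9 above reads as a short recursive computation of path sets. A sketch under an illustrative tuple encoding of programs (("r", name), ("seq", P, Q), ("par", P, Q)); all helper names are ours, and a path is a tuple of frozensets alternating nodes and arcs:

```python
def converges(p):
    # P#: no sequential composition inside P
    return p[0] == "r" or (p[0] == "par" and converges(p[1]) and converges(p[2]))

def reductions(p):
    # [P]: the set of reductions comprising P
    return {p[1]} if p[0] == "r" else reductions(p[1]) | reductions(p[2])

def active_region(p):
    # |P|: reductions not guarded by a ";" on their left
    if p[0] == "r":
        return {p[1]}
    if p[0] == "seq":
        return active_region(p[1])
    return active_region(p[1]) | active_region(p[2])

def passive_steps(p):
    # yield (R, P') for each abstract passive step P --R--> P'
    if p[0] == "seq":
        l, r = p[1], p[2]
        if converges(l):
            yield reductions(l), r
        for R, l2 in passive_steps(l):
            yield R, ("seq", l2, r)
    elif p[0] == "par":
        l, r = p[1], p[2]
        for R, l2 in passive_steps(l):
            yield R, ("par", l2, r)
        for R, r2 in passive_steps(r):
            yield R, ("par", l, r2)

def paths(p):
    """Definition 9: all complete paths through the graph of p."""
    result = set()
    if converges(p):                       # if P# then <[P]> in paths(P)
        result.add((frozenset(reductions(p)),))
    for R, p2 in passive_steps(p):         # <|P|, R> p' for each step
        for tail in paths(p2):
            result.add((frozenset(active_region(p)), frozenset(R)) + tail)
    return result

prog = ("seq", ("r", "r1"), ("par", ("r", "r2"), ("r", "r3")))
print(paths(prog))
```

For r1 ; (r2 || r3) this yields the single path <{r1} {r1} {r2, r3}>, matching the example above.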
Comparing two paths of the same length, one path describes a broader range of behaviours than another if it has at least as many reductions at each corresponding node (the odd elements of the sequence) but no more reductions on each edge (the even elements of the sequence). This leads to the following:

Definition 11 (Path Inclusion). Two paths of equal length, p = <n1, a1, n2, ..., a_{k-1}, n_k> and p' = <n'1, a'1, n'2, ..., a'_{k-1}, n'_k>, are in the path-inclusion ordering, written p <= p', if

1. n_k = n'_k,
2. n_i is a subset of n'_i, for all i < k, and
3. a'_i is a subset of a_i, for all i < k.

The path-inclusion ordering is defined on composed reduction systems as: P <= Q if and only if for all p in paths(P) there exists a path q in paths(Q) such that p <= q.

Note that there is a stronger condition on the last node of a path. This is because the last node carries additional significance, since it is also a termination condition.

Proposition 12. P <= Q implies P <=o Q.

Proof. Since the transition traces are easily constructed from the paths, the proposition can be proved by showing that if P <= Q then the transition traces of P are contained in those of Q. Given the fact that the behaviours are extractable from the paths, a more direct (and arguably more useful) proof can be given by a compositional definition of the paths of a program. A compositional construction of paths is given in the appendix. []

Consider, for example, the composed reduction system r1 ; r3 ; (r2 || r4):

  paths(r1 ; r3 ; (r2 || r4)) = { <{r1} {r1} {r3} {r3} {r2, r4}> }.

Since <{r1, r3} {r1} {r2, r3} {r3} {r2, r4}> is in paths((r1 ; r2) || (r3 ; r4)), we can conclude that r1 ; r3 ; (r2 || r4) <=o (r1 ; r2) || (r3 ; r4).

Path Stuttering

The main limitation of the path-inclusion ordering is that we can only compare paths of equal length.
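The pointwise comparison of Definition 11 is easy to state executably. A minimal sketch, assuming paths are represented as lists of sets at alternating node (even index) and arc (odd index) positions; the function name is ours:

```python
def path_leq(p, q):
    """p <= q: q allows at least as much at each node, demands no more on arcs."""
    if len(p) != len(q) or len(p) % 2 == 0:
        return False                       # only equal, odd-length paths compare
    if p[-1] != q[-1]:                     # condition 1: last nodes equal
        return False
    for i in range(0, len(p) - 1, 2):      # condition 2: n_i subset of n'_i
        if not p[i] <= q[i]:
            return False
    for i in range(1, len(p), 2):          # condition 3: a'_i subset of a_i
        if not q[i] <= p[i]:
            return False
    return True

# The example from the text:
p = [{"r1"}, {"r1"}, {"r3"}, {"r3"}, {"r2", "r4"}]
q = [{"r1", "r3"}, {"r1"}, {"r2", "r3"}, {"r3"}, {"r2", "r4"}]
print(path_leq(p, q))  # True
```

Here p is the single path of r1 ; r3 ; (r2 || r4) and q the quoted path of (r1 ; r2) || (r3 ; r4), so the check reproduces the conclusion of the example above.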
So, for example, we cannot prove the inequality

  (r1 || r3) ; (r2 || r4) <=o (r1 ; r2) || (r3 ; r4)

since the path of (r1 || r3) ; (r2 || r4) (there is only one) is shorter than all the paths of (r1 ; r2) || (r3 ; r4).

The solution is to define an analogue of closure under stuttering at the level of paths. We do not literally add stuttering paths, but rather paths which give rise to stuttering. Consider a path of the form p1<n, a>p2. The arc a represents an internal termination step. Operationally, after this step is performed, we could offer some reductions from a, say n', and none will be applicable, and hence we can converge for all reductions in n'. Hence the path p1<n, a, n', n'>p2 describes no more (but no fewer) behaviours than p1<n, a>p2. This leads us to a definition of path-stuttering equivalence.

Definition 13. Let path-stuttering equivalence, =s, be the least equivalence relation on paths such that, for all paths p1, p2 (p1 possibly empty) and for all sets of reductions n, a, n' with n' a subset of a, and a a subset of n,

  p1<n, a>p2 =s p1<n, a, n', n'>p2.

For example, <{r1, r2} {r1, r2} {r3}> =s <{r1, r2} {r1, r2} {r1} {r1} {r3}>. With the development that follows, we will be able to conclude that

  (r1 || r2) ; r3 ==o (r1 || r2) ; r1 ; r3.

Now we use path-stuttering equivalence to coarsen the path-inclusion ordering. As before, we define a preorder on paths and extend this to programs in the obvious way:

Definition 14 (Stuttered Path Inclusion). Two paths, p and p', are in the stuttered path-inclusion ordering, written p <=s p', if there exist p1, p2 such that p =s p1 <= p2 =s p'. On composed reduction systems we define P <=s Q if and only if for all p in paths(P) there exists a path q in paths(Q) such that p <=s q.

Proposition 15. P <=s Q implies P <=o Q.

Proof (Outline). It is sufficient to show that P <=s Q implies zT[[P]] is contained in zT[[Q]]. The main step is to show that the closed (z) traces corresponding to a path p1<n, a>p2 are equal to the closed traces of p1<n, a, n', n'>p2 whenever n' is a subset of a and a a subset of n. We omit the details. []

References

[Abr79] K. Abrahamson.
Modal logic of concurrent nondeterministic programs. In Proceedings of the International Symposium on Semantics of Concurrent Computation, volume 70 of LNCS, pages 21-33. Springer-Verlag, 1979.

[Bac89a] R. Back. A method for refining atomicity in parallel algorithms. In PARLE '89, volume II, number 365 in LNCS. Springer-Verlag, 1989.

[Bac89b] R. Back. Refinement calculus, part II: Parallel and reactive programs. In Stepwise Refinement of Distributed Systems: Models, Formalisms, Correctness, number 430 in LNCS. Springer-Verlag, 1989.

[BM92] J.-P. Banâtre and D. Le Métayer, editors. Research Directions in High-level Parallel Programming Languages. Number 574 in LNCS. Springer-Verlag, 1992.

[BM93] J.-P. Banâtre and D. Le Métayer. Programming by multiset transformation. CACM, January 1993. (INRIA research report 1205, April 1990.)

[Bro93] S. Brookes. Full abstraction for a shared variable parallel language. In Logic in Computer Science (LICS). IEEE, 1993.

[CM88] K. M. Chandy and J. Misra. Parallel Program Design: A Foundation. Addison-Wesley, 1988.

[dBKPR91] F. S. de Boer, J. N. Kok, C. Palamidessi, and J. J. M. M. Rutten. The failure of failures in a paradigm for asynchronous communication. In CONCUR '91, number 527 in LNCS, pages 111-126. Springer-Verlag, 1991.

[Dij76] E. Dijkstra. A Discipline of Programming. Prentice-Hall, 1976.

[DJ89] N. Dershowitz and J.-P. Jouannaud. Rewrite Systems, volume B, chapter 15. North-Holland, 1989.

[Hin69] R. Hindley. An abstract form of the Church-Rosser theorem. J. Symbolic Logic, 34(1), 1969.

[HMS92] C. Hankin, D. Le Métayer, and D. Sands. A calculus of Gamma programs. Research Report DOC 92/22 (28 pages), Department of Computing, Imperial College, 1992. (Short version to appear in the Proceedings of the Fifth Annual Workshop on Languages and Compilers for Parallelism, August 1992, Springer-Verlag.)

[HP79] M. Hennessy and G. D. Plotkin. Full abstraction for a simple parallel programming language.
In Mathematical Foundations of Computer Science, volume 74 of LNCS, pages 108-120. Springer-Verlag, 1979.

[Klo92] J. Klop. Term rewriting systems. In S. Abramsky, D. Gabbay, and T. Maibaum, editors, Handbook of Logic in Computer Science, volume II. OUP, 1992.

[Lam89] L. Lamport. A simple approach to specifying concurrent systems. C. ACM, 31(1):32-45, January 1989.

[Mes92] J. Meseguer. Conditional rewriting logic as a unified model of concurrency. TCS, 94, 1992.

[MW91] J. Meseguer and T. Winkler. Parallel programming in Maude. In [BM92], 1991.

[Par79] D. Park. On the semantics of fair parallelism. In Abstract Software Specifications (1979 Copenhagen Winter School Proceedings), number 86 in LNCS, pages 504-526. Springer-Verlag, 1979.

[Plo76] G. D. Plotkin. A powerdomain construction. SIAM J. Comput., 5(3):452-487, September 1976.

[Ros73] B. Rosen. Tree-manipulating systems and Church-Rosser theorems. J. ACM, 20(1):160-187, January 1973.

[San93a] D. Sands. A compositional semantics of combining forms for Gamma programs. In International Conference on Formal Methods in Programming and Their Applications. Springer-Verlag, 1993.

[San93b] D. Sands. Laws of parallel synchronised termination. In Theory and Formal Methods 1993: Proceedings of the First Imperial College, Department of Computing, Workshop on Theory and Formal Methods, Isle of Thorns, UK, 1993. Springer-Verlag Workshops in Computer Science.

[Sta74] J. Staples. Church-Rosser theorems for replacement systems. In Algebra and Logic: Papers from the Summer Research Institute of the Australian Mathematical Society, number 450 in Lecture Notes in Mathematics. Springer-Verlag, 1974.

A Compositional Definition of Transition Traces

To give the compositional construction of transition traces we need to define the appropriate sequential and parallel composition operators over sets of traces.

Notation. In what follows we adopt the following notation.
If S is a set, then S* will denote the set of finite sequences of elements of S, S+ the finite non-empty sequences, and S^inf the infinite sequences. The power-set is denoted by P(S). Note in particular that for a reduction relation ->r, we will write (->r)* to denote the finite sequences of pairs contained in ->r, and not the transitive-reflexive closure of the relation.

Sequential Composition. Sequential composition has an easy definition: we just take all concatenations of the atomic traces of the components. As usual, if s and t are traces, then st denotes their concatenation, which is just s when s is infinite. Define the following sequencing operation for trace sets:

  T1 . T2 = { st | s in T1, t in T2 }

End-Synchronised Merge. Not surprisingly, parallel composition is described with the use of a merging combinator which interleaves traces. The peculiarities of parallel composition are prominent in the definition. To define the transition traces of P1 || P2 we must ensure that the traces of P1 and P2 are interleaved, but not arbitrarily; the termination step of a parallel composition requires an agreement, or synchronisation, at the point of their termination. To build up the picture, suppose s and t in (U x U)+ are traces of some programs P1 and P2 respectively. The set of all interleavings of s and t which correspond to possible executions of P1 || P2 can be given inductively by:

  (M, M) >< (N, N) = { (M, M) }  if M = N, and the empty set otherwise

  (M, M')s >< (N, N')t = { (M, M')g | s nonempty, g in s >< (N, N')t }
                       u { (N, N')g | t nonempty, g in (M, M')s >< t }
      if either s or t is nonempty.

To generalise the definition to incorporate the infinite traces as well as the finite ones, we need to define the interleavings via a maximal fixed point rather than a minimal fixed point, as implicit in the above definition. There are many possible ways of presenting this construction. We choose an implicit definition of the required maximal fixed point:

Definition 16. A function m, from pairs of nonempty traces to sets of nonempty traces, is an end-synchronised merger (ESM) if the following conditions are satisfied:

1.
if (M, N) is in m(s, t) then s = t = (M, N);

2. if g is in m(s, t) then (M, N)g is in m((M, N)s, t) and (M, N)g is in m(s, (M, N)t).

Definition 17. The generalised end-synchronised merge is given by the pointwise union of all end-synchronised mergers:

  s >< t = union of { m(s, t) | m is an ESM }

Note that >< is itself an ESM (this follows from the Knaster-Tarski fixed-point theorem), and is therefore the largest ESM. The corresponding relation on sets of traces will provide the denotation of parallel composition:

  T1 ||| T2 = { g | s in T1, t in T2, g in s >< t }

Definition 18. The compositional atomic trace mapping Tc[[.]] : P -> P((U x U)+ u (U x U)^inf) is given by induction on the syntax as:

  Tc[[r]] = z((->r)* . { (M, M) | M#r }) u z((->r)^inf)
  Tc[[P1 ; P2]] = z(Tc[[P2]] . Tc[[P1]])
  Tc[[P1 || P2]] = z(Tc[[P1]] ||| Tc[[P2]])

Soundness. It is straightforward to show that Tc[[.]] is sound with respect to the behaviours of a program. The following lemma gives the basic operational correspondence:

Lemma 19.
1. <P, M> -> M implies (M, M) in Tc[[P]]
2. <P, M> -> <P', N> and g in Tc[[P']] implies (M, N)g in Tc[[P]]

Proposition 20. Tc[[P]] contained in Tc[[Q]] implies P <=o Q.

Proof. From Lemma 19 it is easy to see that Tc[[P]] contained in Tc[[Q]] implies B(P) contained in B(Q). The operations used to build the compositional definition are all monotone with respect to subset inclusion, and so a simple induction on contexts is sufficient to give

  Tc[[P]] contained in Tc[[Q]] implies, for all C, Tc[[C[P1]]] contained in Tc[[C[P2]]].

Putting these together we have

  Tc[[P]] contained in Tc[[Q]] implies, for all C, B(C[P1]) contained in B(C[P2]), which holds if and only if P1 <=o P2. []

B Path Constructions

Transition Traces from Paths. The transition traces can be constructed from the paths as follows (overloading the mapping T[[.]]):

  T[[<R>]] = (TR)^inf u ((TR)* . { (M, M) | r in R, M#r })
  T[[<R, R'>pi]] = (TR)^inf u ((TR)* . { (M, M) | r in R', M#r } . T[[pi]])

where TR = { (M, N) | r in R, M ->r N }.

Paths Constructed Compositionally

Proposition 21. The following equations uniquely characterise the paths of a program:

  paths(r) = { <{r}> }
  paths(P ; Q) = { pi1 <R> pi2 | pi1 in paths(P), pi2 in paths(Q), R = last(pi1) }
  paths(P || Q) = { pi1 x pi2 | pi1 in paths(P), pi2 in paths(Q) }

where the merge x on paths is:

  <R> x <R'> = <R u R'>
  <R1, R2>pi x <R'> = <R'> x <R1, R2>pi = { <(R1 u R'), R2>pi' | pi' in pi x <R'> }
  <R1, R2>pi1 x <R'1, R'2>pi'1 = { <(R1 u R'1), R2>g | g in pi1 x <R'1, R'2>pi'1 }
                               u { <(R1 u R'1), R'2>g | g in <R1, R2>pi1 x pi'1 }

Towards Operational Semantics of Contexts in Functional Languages

David Sands
DIKU, University of Copenhagen*

Abstract. We consider operational semantics of contexts (terms with holes) in the setting of lazy functional languages, with the aim of providing a balance between operational and compositional reasoning, and a framework for semantics-based program analysis and manipulation.

Introduction

In this note we initiate a new direction in the semantics of functional programs. The approach is based on operational semantics; our aim is to provide an operational route to high-level semantic issues, such as program analysis and source-to-source transformation. We investigate the idea of giving a direct operational semantics to program contexts, that is, "incomplete" programs containing a number of holes in the place of some subexpressions.

The idea of providing an operational semantics for contexts has been studied by Larsen (et al.) for process algebras [Lar86][LX91]. In that setting, a context is viewed as an action transducer, which consumes actions provided by its internal processes (the holes) and produces externally observable actions. The operational semantics of contexts contains transitions of the form

  C --b/a--> C'

which is interpreted as: by consuming action a, context C can produce action b and change into C'.

We describe some initial steps towards providing an operational semantics for contexts in a functional setting.

Functional Action-Transducers

In the process setting a context is viewed as an action transducer. What is the corresponding notion for a context in the functional setting? In a functional language, the role of an "action" is played by the observables of the language: namely a lazy data constructor (cons, true, ...).

* Universitetsparken 1, 2100 København, DENMARK.
e-mail: [email protected] We take a bold step, and demand that the \actions" should themselves becontexts|but not arbitrary contexts. They should be contexts built from theobservables of the language. We will call these observable contexts. Observablecontexts will be ranged-over by O, O0, etc. For some context C containingoccurrences of a single hole, if we have a transduction of the form:C O0-O C 0then we will require that: C[O] ' O0[C 0] ( )where the notation C[e] denotes context C with e placed \in the hole", and 'denotes the usual operational equivalence, extended to capture-free contexts inthe obvious way. This equation does not capture everything we expect from acontext semantics. What we expect is that the transduction should be as lazyas possible | ie. we could not have found a smaller \input" O that would havegiven the same observable \output"O0. If the language is sequential[Ber78]then we expect the semantics to give us the minimal O.For example, a context of the form if [ ] = 0 then C1 else (leaf C2)(containing multiple occurrences of a single hole), where leaf is a constructor,would have a transduction:if [ ] = 0 then C1 else (leaf C2) leaf [ ]-suc [ ] C2 suc[ ]where is context composition, so C2 suc[ ] denotes the contextC2[suc[ ]].Problems Giving a full operational semantics for contexts is di cult because:contexts can consume without producing an observable.For example if [ ] = 0 : : : can consume a constructor without necessarilybeing able to produce anything. (Similar situation would arise in Larsen'swork if one did not consider the silent action to be observable.)The number of occurrences of a given hole may increase under transduc-tion (assuming we use some mechanism like -reduction in our semantics).The number of distinct holes in a context can increase under transduction,in the presence of n-ary constructors.eg. 
if C1 has a single hole and C1 --cons [ ]1 [ ]2--> C2, then C2 has two distinct holes.

- How do we treat higher-order functions?

- Contexts can capture variables by means of binding operators, e.g.

    case e of
      nil ==> e0
      cons h t ==> C2.

A Simplified Case

As a first step we study only a very simple language. We avoid almost all of the above problems by considering a language with the following features:

- first-order functions;
- no binding operators;
- unary constructors and constants (= nullary constructors) as the only values.

The language we consider consists of first-order recursion equations with possible pattern matching on the first argument (non-nested), based on unary or nullary constructors. Here are some example definitions:

  add 0 x = x
  add (suc y) x = suc (add y x)
  twice x = add x x

Notation. Let u range over both unary constructors (e.g. suc) and constants (e.g. 0, true, etc.). The observable contexts are then given by

  O ::= [ ] | u | u O

For simplicity of presentation, we will only consider contexts with a single hole (occurring zero or more times). (We will consider an expression to be a context with zero occurrences of the hole.) For this restricted language, the extension to handle polyadic contexts (contexts with several distinct holes) is straightforward (e.g. borrowing the notations from [LX91]).

In the context transductions, if u is a unary constructor, then we write the observable context u [ ] simply as u, and occurrences of the trivial context [ ] will simply be omitted from the transductions, so we will write C --suc--> C' in place of C --suc [ ]/[ ]--> C'. If u is a constant, then u can also be denoted u(). We add the unit expression () to the language of contexts.

Language Rules

We define the following transductions involving terms of the language:

  u C --u--> C    (1)

  f ~C --> e{~x := ~C}    if f ~x = e    (2)

  C1 --u/O--> C2
  ------------------------------------------    if f (u y) ~x = e    (3)
  f C1 ~C --/O--> e{y := C2}{~x := ~C o O}

In the last rule, the composition ~C o O denotes the vector of contexts obtained by composing each context in ~C with O.
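The example recursion equations above transcribe directly into executable form. A Python sketch over a tuple encoding of the unary/nullary constructors (ZERO, suc, and the function names are our own rendering of the equations, not of the transduction rules):

```python
ZERO = ("0",)                      # the constant 0

def suc(t):                        # the unary constructor suc
    return ("suc", t)

def add(m, x):
    """add 0 x = x ; add (suc y) x = suc (add y x)"""
    if m == ZERO:                  # matches  add 0 x
        return x
    _, y = m                       # matches  add (suc y) x
    return suc(add(y, x))

def twice(x):
    """twice x = add x x"""
    return add(x, x)

def to_int(t):
    """Read a Peano numeral back as a Python int (for display only)."""
    n = 0
    while t != ZERO:
        n, t = n + 1, t[1]
    return n

print(to_int(twice(suc(suc(ZERO)))))  # 4
```

As in the text, pattern matching happens only on the first argument and is non-nested.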
To check that the rule satisfies the desired property (*), assume that from the antecedent we have C1 o O ~ u C2. Then

  (f C1 ~C) o O ~ f (C1 o O) (~C o O)
                ~ f (u C2) (~C o O)
                ~ e{y := C2}{~x := ~C o O}

These rules are straightforward, since they follow the "small-step" semantics of the language (i.e., they are non-compositional). We recover the ability to reason compositionally using the following context rules:

Context Rules

  [ ] --u/u--> [ ]    (u unary)    (4)

  [ ] --u/u--> ()     (u constant)    (5)

  C --> C    (6)

  C1 --O'1/O1--> C2    C2 --O'2/O2--> C3
  --------------------------------------    (7)
  C1 --O'1 O'2 / O1 O2--> C3

  C1 --O1/O2--> C'1    C2 --O2/O3--> C'2
  --------------------------------------    (8)
  C1 o C2 --O1/O3--> C'1 o C'2

The last rule is the uniform rule of [Lar89][LX91], and it characterises all the transductions of composed contexts.

Properties

Closed expressions can be viewed as contexts containing zero holes. In this way the rules above can be seen to subsume the usual large-step and small-step structural operational semantics. Suppose the large-step semantics defines an evaluation relation || (we omit the routine definition); then we have the following:

  e || a  if and only if  e --u--> e' and u e' = a

For example, if I is the identity function, then we have the following example proof: by rule (2), I (suc 0) --> suc 0; by rule (1), suc 0 --suc--> 0; and composing these, I (suc 0) --suc--> 0.

But this is not the whole story for evaluation of closed expressions. Unlike the structural operational semantics for ||, the proof of e --u--> e' is not unique. An important point is that by use of the composition rule (8) we can vary the compositionality. This means that when we need to prove a property of a function application f e, we can split it into a context f [ ] and the subterm e, and derive the transition of the composed system in terms of these components. As an example, using the functions defined earlier, consider the term twice (I (suc 0)). We can prove that
add [ ] [ ] Atwice [ ] suc-suc add [ ] (suc [ ])where A is the sub-proof:74,1[ ] suc-suc [ ]add [ ] [ ] -suc suc add [ ] (suc [ ]) sucadd [ ] (suc [ ])add [ ] [ ] suc-suc add [ ] (suc [ ])and so using the composition rule we obtain:8twice [ ] suc-suc add [ ] (suc [ ]) I (suc 0) suc0twice (I (suc 0)) sucadd 0 (suc 0)Note that in the example, because of the use of the compositional rule, there isonly one sub-proof for the expression I (suc 0), whereas under the standardcall by name SOS we would have two sub-proofs.381 We conjecture that proofs which always treat function calls composition-ally in this way (we need to generalise to n-ary contexts to do this for n-aryfunctions) have size proportional to the number of evaluation steps required un-der standard call-by-need computation. This form of proof corresponds to thedemand function used in Bjerner and Holmstrom's call-by-need time-analysis[BH89].Further WorkIn the remainder of this paper we consider the directions for further develop-ment, which mostly concern tackling the problems of richer languages.Polyadic ContextsThe above semantics is easily extended to handle polyadic contexts, but if wego beyond just unary constructors then the extension quickly becomes nota-tionally complex. The problem is that consuming an observable context maygive rise to several new holes, and producing a observable context means thata transduction may result in several contexts. Our proposal for dealing withthese problems is to adopt a di erent kind of transduction. 
Instead of requiring that a transduction

  C --O'/O--> C'

implies that C o O ~ O' o C', the requirement is that

  C o O ~ O' o C' o O

In this way the "type" of the holes in the derived context C' is the same as in C. The addition of n-ary constructors means that C' must be a vector of contexts. With this interpretation of context transductions, the uniform composition rule now has the form:

  C1 --O1/O2--> C'1    C2 --O2/O3--> C'2
  --------------------------------------
  C1 o C2 --O1/O3--> C'1 o O2 o C'2

Higher-Order Functions

If we focus on contexts which cannot capture variables, then higher-order functions can be thought of as introducing an extra hole. We anticipate that to deal with polyadic contexts in their full generality we will need a notation along the lines of Martin-Löf's theory of arities, so that (X)C will denote a context with a single hole named X. Then a lambda-context could have a transition

  (~Y) \y.C --> (X)(~Y)C    (X not in ~Y).

Abstract Contexts and Relativised Equivalence

Two natural directions are to consider static analysis problems, and notions of relativised equivalence. We should consider:

- bisimulation-like characterisations of context equivalence, along the lines of [Abr90][How89];
- relativised equivalences ~C: e ~C e' if and only if C[e] ~ C[e'];
- semantics for abstract observable contexts (whose meaning is a set of contexts);
- a notion of environment as a dual to (abstract) observable contexts, thus generalising the demand semantics of [San93].

These points should enable an operational formalisation of context analysis of the form of [Hug87][WH87].

Guarded Contexts

In a somewhat orthogonal study we have considered context semantics for a restricted class of contexts (a form of guarded contexts) for a higher-order language with binding operators and arbitrary lazy constructors. The principal technical problem in this context semantics is to handle holes which occur under bound variables. This operational semantics of contexts finds immediate application to the problem of correct folding in program transformation.
It also provides a simple form of "applicative bisimulation up to context" proof technique, à la Sangiorgi [San94].

Acknowledgement. An earlier investigation of the subject of this note was undertaken together with Sebastian Hunt a few years ago. Our attempt failed because we were over-ambitious in trying to give only compositional rules. But a number of ideas crystallised from our attempt, and have influenced the current development. One idea in particular, that the "actions" should themselves be contexts, is due to Sebastian.

References

[Abr90] S. Abramsky. The lazy lambda calculus. In D. Turner, editor, Research Topics in Functional Programming, pages 65-116. Addison-Wesley, 1990.

[Ber78] G. Berry. Stable models of typed lambda calculi. In 5th Coll. on Automata, Languages and Programming, LNCS 62. Springer-Verlag, 1978.
[BH89] B. Bjerner and S. Holmström. A compositional approach to time analysis of first order lazy functional programs. In Functional Programming Languages and Computer Architecture, conference proceedings, pages 157-165. ACM Press, 1989.

[How89] D. J. Howe. Equality in lazy computation systems. In Fourth Annual Symposium on Logic in Computer Science, pages 198-203. IEEE, 1989.

[Hug87] R. J. M. Hughes. Backwards analysis of functional programs. Research Report CSC/87/R3, University of Glasgow, March 1987.

[Lar86] K. G. Larsen. Context-Dependent Bisimulation Between Processes. PhD thesis, Department of Computing, University of Edinburgh, 1986.

[Lar89] K. G. Larsen. Compositional theories based on an operational semantics of contexts. In Stepwise Refinement of Distributed Systems: Models, Formalisms, Correctness, number 430 in LNCS. Springer-Verlag, 1989.

[LX91] K. G. Larsen and L. Xinxin. Compositionality through an operational semantics of contexts. J. Logic and Computation, 1(6):761-795, 1991.

[San93] D. Sands. A naive time analysis and its theory of cost equivalence. TOPPS report D-173, DIKU, 1993. To appear: Journal of Logic and Computation, 1995.

[San94] D. Sangiorgi.
On the bisimulation proof method. Technical report, University of Edinburgh, 1994.

[WH87] P. Wadler and R. J. M. Hughes. Projections for strictness analysis. In 1987 Conference on Functional Programming and Computer Architecture, pages 385-407, Portland, Oregon, September 1987.

An Approach to the Category of Net Computations

Vladimiro Sassone
BRICS*, Computer Science Dept., University of Aarhus

Abstract. We introduce the notion of strong concatenable process as a refinement of concatenable processes [3] which can be expressed axiomatically via a functor Q[ ] from the category of Petri nets to an appropriate category of symmetric strict monoidal categories, in the precise sense that, for each net N, the strong concatenable processes of N are isomorphic to the arrows of Q[N]. In addition, we identify a coreflection right adjoint to Q[ ] and characterise its replete image, thus yielding an axiomatisation of the category of net computations.

Introduction

Petri nets, introduced by C. A. Petri in [8] (see also [10]), are unanimously considered among the most representative models for concurrency, since they are a fairly simple and natural model of concurrent and distributed computations. However, Petri nets are, in our opinion, not yet completely understood.

Among the semantics proposed for Petri nets, a relevant role is played by the various notions of process [9, 4, 1], whose merit is to provide a faithful account of computations involving many different transitions and of the causal connections between the events occurring in a computation. However, process models, at least in their standard forms, fail to bring to the foreground the algebraic structure of nets and their computations.
Since such a structure is relevant to the understanding of nets, they fail, in our view, to give a comprehensive account of net behaviours.

The idea of looking at nets as algebraic structures [10, 7, 13, 14, 2] has been given an original interpretation by considering monoidal categories as a suitable framework [6]. In fact, in [6, 3] the authors have shown that the semantics of Petri nets can be understood in terms of symmetric monoidal categories, where objects are states, arrows are processes, and the tensor product and the arrow composition model, respectively, the operations of parallel and sequential composition of processes. In particular, [3] introduced concatenable processes (the slightest variation of Goltz-Reisig processes [4] on which sequential composition can be defined) and structured the concatenable processes of a Petri net N as the arrows of the symmetric strict monoidal category P[N]. This yields an axiomatisation of the causal behaviour of a net as an essentially algebraic theory and thus provides a unification of the process view and the algebraic view of net computations.

However, this construction is also somehow unsatisfactory, since it is not functorial. More strongly, given a morphism between two nets, i.e., a simulation between them, it may not be possible to identify a corresponding monoidal functor between the respective categories of computations. This fact, besides showing that our understanding of the algebraic structure of Petri nets is still incomplete, prevents us from identifying the category (of the categories) of net computations, i.e., from axiomatising the behaviour of Petri nets "in the large".

* Basic Research in Computer Science, Centre of the Danish National Research Foundation. Supported by EU Human Capital and Mobility grant ERBCHBGCT920005.

This paper presents an analysis of this issue and a solution based on the new notion of strongly concatenable processes, introduced in Section 4.
These are a slight refinement of concatenable processes which are still rather close to the standard notion of process: they are Goltz-Reisig processes whose minimal and maximal places are linearly ordered. In the paper we show that, similarly to concatenable processes, this new notion can also be axiomatised as an algebraic construction on N by providing an abstract symmetric strict monoidal category Q[N] whose arrows are in one-to-one correspondence with the strongly concatenable processes of N. The category Q[N] constitutes our proposed axiomatisation of the behaviour of N in categorical terms.

Corresponding directly to the linear ordering of pre- and post-sets which characterises strongly concatenable processes, the key feature of Q[ ] is that, differently from P[ ], it associates to the net N a monoidal category whose objects form a free non-commutative monoid. The reason for renouncing commutativity when passing from P[ ] to Q[ ], a choice that at first may seem odd, is explained in Section 2, where the following negative result is proved: under very reasonable assumptions, no mapping from nets to symmetric strict monoidal categories whose monoids of objects are commutative can be lifted to a functor, since there exists a morphism of nets which cannot be extended to a monoidal functor between the appropriate categories. Thus, abandoning the commutativity of the monoids of objects and considering strings as representatives of multisets, i.e., considering strongly concatenable processes, seems to be a choice forced upon us by the aim of a functorial algebraic semantics of nets. As a consequence of this choice, any transition of N has many corresponding arrows in Q[N], actually one for each linearisation of its pre-set and of its post-set. However, such arrows are "related" to each other by a naturality condition, in the precise sense that, when collected together, they form a natural transformation between appropriate functors.
This naturality axiom is the second relevant feature of Q[-], and it is actually the key to keeping the computational interpretation of the new category Q[N], i.e., the strongly concatenable processes, surprisingly close to that of P[N], i.e., the concatenable processes.

Concerning our main issue, viz. functoriality, in Section 3 we introduce a category TSSMC of symmetric strict monoidal categories with free non-commutative monoids of objects, called symmetric Petri categories, whose arrows are equivalence classes (accounting for our view of strings as representatives of multisets) of those symmetric strict monoidal functors which preserve some further structure related to nets, and we show that Q[-] is a functor from Petri, a rich category of nets introduced in [6], to TSSMC. In addition, we prove that Q[-] has a coreflection right adjoint N[-]: TSSMC -> Petri. This implies, for general reasons, that Petri is equivalent to an easily identified coreflective subcategory of TSSMC, namely the replete image of Q[-]. The category TSSMC, together with the functors Q[-] and N[-], constitutes our proposed axiomatization ('in the large') of Petri net computations in categorical terms.

Although this contribution is a first attempt towards the aims of a functorial algebraic semantics for nets and of an axiomatization of net behaviours 'in the large', we think that the results given here help to deepen the understanding of the subject. We remark that the refinement of concatenable processes into strongly concatenable processes is similar and comparable to the one which led from Goltz-Reisig processes to concatenable processes, and that the result of Section 2 makes strongly concatenable processes 'unavoidable' if a functorial construction is desired.
In addition, from the categorical viewpoint, our approach is quite natural: it simply observes that multisets are equivalence classes of strings and then follows the categorical paradigm, according to which one always prefers to add suitable isomorphisms between objects rather than to consider explicit equivalence classes of them. Finally, concerning the use of category theory in semantics, and in particular in this paper, it may be appropriate to observe here that the categorical framework made it possible to discover and amend an 'anomaly' of P[-], significant and of general relevance, which could not have been noticed in other frameworks.

Due to the extended abstract nature of this paper, most of the proofs are omitted. Some preliminary related results appear also in [11].

Notation. When dealing with a category C in which arrows are meant to represent computations, in order to stress its computational interpretation we write arrow composition from left to right, i.e., in diagrammatic order, and denote it by ; . The reader is referred to [5] for the categorical concepts used.

Acknowledgements. I wish to thank Jose Meseguer and Ugo Montanari, to whom I am indebted for several discussions on the subject. Thanks to Mogens Nielsen, Claudio Hermida and Jaap van Oosten for their valuable comments on an early version of this paper.

1 Concatenable Processes

In this section we recall the notion of concatenable processes [3].

Notation. Given a set S, we denote by S⊕ the set of finite multisets of S, i.e., the set of all functions from S to the set ω of natural numbers which yield nonzero values only on finitely many s ∈ S. We recall that S⊕ is a commutative monoid, actually the free commutative monoid on S, under the operation of multiset union, in the following denoted by ⊕, with unit element the empty multiset 0.

Definition 1.1 (Petri Nets)
A Petri net is a structure N = (∂0_N, ∂1_N: T_N →
S_N⊕), where T_N is a set of transitions, S_N is a set of places, and ∂0_N and ∂1_N are functions.
A morphism of Petri nets from N0 to N1 is a pair <f, g>, where f: T_N0 -> T_N1 is a function and g: S_N0⊕ -> S_N1⊕ is a monoid homomorphism such that <f, g> respects source and target, i.e., ∂i_N1 ∘ f = g ∘ ∂i_N0, for i = 0, 1.
This defines the category Petri of Petri nets.

This describes a Petri net precisely as a graph whose set of nodes is a free commutative monoid, i.e., the set of finite multisets on a given set of places. The source and target of an arc, here called a transition, are meant to represent, respectively, the markings consumed and produced by the firing of the transition.

Definition 1.2 (Process Nets and Processes)
A process net is a finite, acyclic net Θ such that for all t ∈ T_Θ, ∂0(t) and ∂1(t) are sets (as opposed to multisets), and for all t0 ≠ t1 ∈ T_Θ, ∂i(t0) ∩ ∂i(t1) = ∅, for i = 0, 1.
Given N ∈ Petri, a process of N is a morphism π: Θ -> N, where Θ is a process net and π is a net morphism which maps places to places (as opposed to morphisms which map places to markings).

We consider as identical process nets which are isomorphic. Consequently, we shall make no distinction between two processes π: Θ -> N and π′: Θ′ -> N for which there exists an isomorphism φ: Θ -> Θ′ such that π′ ∘ φ = π.

The equivalence of the following definition of P[N] with the original one in [3] has been proved in [12].
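As a concrete reading of Definition 1.1, a Petri net is just a graph whose nodes are multisets of places. The following minimal Python sketch (the encoding and the names are mine, not the paper's) represents markings as collections.Counter multisets and checks the source/target condition of a net morphism, using the nets and the morphism of Example 2.1 below as data:

```python
from collections import Counter

def apply_hom(g0, m):
    """Extend a map g0: place -> multiset to a monoid homomorphism on multisets."""
    out = Counter()
    for place, k in m.items():
        for q, j in g0[place].items():
            out[q] += k * j
    return out

# The nets N and N-bar of Example 2.1: transition -> (pre-set, post-set).
N = {"t0": (Counter(["a0"]), Counter(["b0"])),
     "t1": (Counter(["a1"]), Counter(["b1"]))}
Nbar = {"t0'": (Counter(["a"]), Counter(["b0"])),
        "t1'": (Counter(["a"]), Counter(["b1"]))}

# The morphism <f, g>: f on transitions, g0 generating g on places.
f = {"t0": "t0'", "t1": "t1'"}
g0 = {"a0": Counter(["a"]), "a1": Counter(["a"]),
      "b0": Counter(["b0"]), "b1": Counter(["b1"])}

def is_morphism(src, tgt, f, g0):
    """Check that <f, g> respects sources and targets (Definition 1.1)."""
    return all(apply_hom(g0, src[t][i]) == tgt[f[t]][i]
               for t in src for i in (0, 1))

print(is_morphism(N, Nbar, f, g0))  # True: <f, g> is a net morphism
```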
The reader is referred to the cited works for a more explicit description of P[N], a wider discussion, and related examples.

Definition 1.3
The category P[N] is the monoidal quotient of F(N), the symmetric strict monoidal category whose monoid of objects is S_N⊕ and whose arrows are freely generated from the transitions of N, modulo the axioms

  c_{a,b} = id_{a⊕b}              if a, b ∈ S_N and a ≠ b;
  t ; (id ⊗ c_{a,a} ⊗ id) = t     if t ∈ T_N and a ∈ S_N;
  (id ⊗ c_{a,a} ⊗ id) ; t = t     if t ∈ T_N and a ∈ S_N;

where c is the symmetry isomorphism of F(N).

The arrows of P[N] have a nice computational interpretation as concatenable processes, a slight refinement of the classical notion of process consisting of a suitable labelling of the minimal and the maximal places of process nets which distinguishes among the different instances of a place in a process of N. The role of the symmetries is to regulate the flow of causality between subprocesses by permuting instances of places appropriately, i.e., by exchanging causes. In this view, the first axiom says that permuting different places does not change the causal relationships, and the remaining two say that the same happens when permuting places in the pre- and in the post-set of a transition. Using the labels, it is easy to define an operation of concatenation of concatenable processes and, thus, a category CP[N] whose objects are the multisets S_N⊕ and whose arrows are the concatenable processes of N. It has been proved in [3] that CP[N] is a symmetric strict monoidal category and that the following result holds.

Theorem 1.4
CP[N] and P[N] are isomorphic.

2 A Negative Result about Functoriality

Among the primary requirements usually imposed on constructions like P[-] there is that of functoriality. One of the main reasons supporting the choice of a categorical treatment of semantics is the need of specifying further the structure of the systems under analysis by giving explicitly the morphisms or, in other words, by specifying how the given systems simulate each other.
This, in turn, means choosing precisely what the relevant (behavioural) structure of the systems is. It is then clear that such morphisms should be preserved at the semantic level. In our case, the functoriality of P[-] means that if N can be mapped to N′ via a morphism <f, g>, which by the very definition of net morphisms implies that N can be simulated by N′, then there must be a way, namely P[<f, g>], to see the processes of N as processes of N′. However, this is not possible for P[-]. The problem, as illustrated by the following example, is due to the first axiom in Definition 1.3 which, on the other hand, is exactly what makes P[N] capture quite precisely the notion of processes of N.

Example 2.1
Consider the nets N and N̄ in the picture below, where we use the standard graphical representation of nets in which circles are places, boxes are transitions, and sources and targets are directed arcs. We have S_N = {a0, a1, b0, b1} and T_N consisting of the transitions t0: a0 -> b0 and t1: a1 -> b1, while S_N̄ = {a, b0, b1} and T_N̄ contains t̄0: a -> b0 and t̄1: a -> b1.

(Figure omitted: the nets N and N̄ in the standard graphical notation.)

Consider now the net morphism <f, g> where f(ti) = t̄i, g(ai) = a and g(bi) = bi, for i = 0, 1. We claim that <f, g> cannot be extended to a monoidal functor P[<f, g>] from P[N] to P[N̄]. Suppose in fact that F is such an extension. Then, it must be F(t0 ⊗ t1) = F(t0) ⊗ F(t1) = t̄0 ⊗ t̄1. Moreover, since t0 ⊗ t1 = t1 ⊗ t0, we would have

  t̄0 ⊗ t̄1 = F(t1 ⊗ t0) = t̄1 ⊗ t̄0,

which is impossible, since the leftmost and the rightmost terms above are different processes in P[N̄].

Formally speaking, the problem is that the category of symmetries sitting inside P[N], say Sym_N, is not free. Moreover, it is easy to verify that as soon as one imposes axioms on P[N] which guarantee to get a functor, one annihilates all the symmetries and, therefore, destroys the ability of P[N] to deal with causality.
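The obstruction in Example 2.1 can be spelled out as a short calculation. The first step below is the naturality of the symmetry c, valid in any symmetric monoidal category; the second uses the first axiom of Definition 1.3, which makes c_{a_0,a_1} and c_{b_0,b_1} identities because the places involved are distinct:

```latex
t_0 \otimes t_1
  \;=\; c_{a_0,a_1} \,;\, (t_1 \otimes t_0) \,;\, c_{b_1,b_0}
  \;=\; t_1 \otimes t_0 .
```

Applying any monoidal extension F of <f, g> then forces t̄0 ⊗ t̄1 = t̄1 ⊗ t̄0 in P[N̄]; but there both a0 and a1 are mapped to the same place a, where c_{a,a} is not an identity, so the two sides are distinct processes.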
It is important to observe that it would be definitely meaningless to try to overcome the problem simply by dropping from Petri the morphisms which 'behave badly': the morphism <f, g> of Example 2.1, for instance, is clearly a simulation and, as such, it should definitely be allowed by any serious attempt to formulate a definition of net morphisms. The following result shows that the problem illustrated in Example 2.1 is serious, actually deep enough to prevent any naive modification of P[-] from being functorial.

Theorem 2.2
Let X[-] be a function which assigns to each net N a symmetric strict monoidal category whose monoid of objects is commutative and contains the places of N. Suppose that the group of symmetries at any object of X[N] is finite, and suppose that there exists a net N with a place a ∈ N such that, for each n > 1, the component at (na, na) of the symmetry isomorphism of X[N] is not an identity. Then, there exists a Petri net morphism <f, g>: N0 -> N1 which cannot be extended to a symmetric strict monoidal functor from X[N0] to X[N1].

Proof. (Sketch.) Let N′ be a net such that, for each n, we have c′_{na,na} ≠ id, where c′ is the symmetry natural isomorphism of X[N′], and let N be a net with two distinct places a and b and with no transitions, and let c be the symmetry natural isomorphism of X[N]. Since the group of symmetries at a⊕b is finite, there is a cyclic subgroup generated by c_{a,b}, i.e., there exists k > 1, the order of the subgroup, such that (c_{a,b})^k = id and (c_{a,b})^n ≠ id for any 1 ≤ n < k. Let p be any prime number greater than k. Then, exploiting general properties of monoidal categories and reasoning as in Example 2.1, one sees that the Petri net morphism <f, g>: N -> N′, where f is the empty function ∅ -> T_N′ and g is the monoid homomorphism such that g(b) = (p - 1)a and g is the identity on the other places of N, cannot be extended to a symmetric strict monoidal functor F: X[N] ->
X[N′]. ∎

The content of the previous proposition may be restated in different terms by saying that in the free category of symmetries on a commutative monoid M there are infinite homsets. This means that dropping the axiom c_{a,b} = id_{a⊕b} in the definition of P[N] causes an 'explosion' of the structure of the symmetries. More precisely, if we omit that axiom we can find some object u such that the group of symmetries on u has infinite order. Of course, since symmetries represent causality, and as such are integral parts of processes, this makes the category so obtained completely useless for the application we have in mind.

The hypotheses of Theorem 2.2 can certainly be weakened in several ways, at the expense of complicating the proof. However, we avoided such complications, since the conditions stated above are already weak enough if one wants to regard X[N] as a category of processes of N. In fact, since places represent the atomic bricks of which states are built, one needs to consider them in X[N]; since symmetries regulate the 'flow of causality', there will be components c_{na,na} different from the identity; and since in a computation we can have only finitely many 'causality streams', there will not be categories with infinite groups of symmetries. Therefore, the given result means that there is no chance of a functorial construction of the processes of N along the lines of P[N] whose objects form a commutative monoid.

3 The Category Q[N]

In this section we introduce the symmetric strict monoidal category Q[N] which is meant to represent the processes of the Petri net N and which supports a functorial construction. This will allow us to characterize the category of the categories of net behaviours, i.e., to axiomatize net behaviours 'in the large'. Theorem 2.2 shows that, necessarily, there is a price to be paid. Here, the idea is to renounce the commutativity of the monoids of objects.
More precisely, we build the arrows of Q[N] starting from Sym_{S_N}, the 'free' category of symmetries over the set S_N of places of N. This choice makes each transition of N have many corresponding arrows in Q[N]; however, the arrows of Q[N] which differ only by being instances of the same transition are linked together by a 'naturality' condition which guarantees that Q[N] remains close to the category P[N] of concatenable processes. Namely, the arrows of Q[N] correspond to Goltz-Reisig processes in which the minimal and the maximal places are linearly ordered.

Similarly to Sym_N, the category Sym_{S_N} serves a double purpose: from the categorical point of view it provides the symmetry isomorphism of a symmetric monoidal category, while from a semantic perspective it regulates the flow of causal dependency. Generally speaking, a symmetry in Q[N] should be interpreted as a 'reorganization' of the tokens in the global state of the net which, when reorganizing multiple instances of the same place, yields an exchange of causes, exactly as Sym_N does for P[N].

Notation. In the following, we use S⊗ to indicate the set of (finite) strings on the set S, more commonly denoted by S*. In the same way, we use ⊗ to denote string concatenation, while 0 denotes the empty string. As usual, for u ∈ S⊗, we indicate by |u| the length of u and by u_i its i-th element.

Definition 3.1 (The Category of Permutations)
Let S be a set. The category Sym_S has for objects the strings in S⊗, and an arrow p: u -> v if and only if p is a permutation of |u| elements and v is the string obtained by applying the permutation p to u, i.e., v_{p(i)} = u_i.
Arrow composition in Sym_S is given by the product of permutations, i.e., their composition as functions, here and in the following denoted by ; .

Graphically, we represent an arrow p: u -> v in Sym_S by drawing a line between u_i and v_{p(i)}, as for example in Figure 1.
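Definition 3.1, together with the operations on permutations given next in Definition 3.2, is concrete enough to sketch in code. In the following Python rendering (an illustration of mine, not the paper's notation), a permutation p on n elements is a 0-indexed tuple with p[i] the image of position i, composition is diagrammatic, and the interchange permutation swaps two blocks wholesale:

```python
def apply_perm(p, u):
    """Apply p: u -> v, i.e. v[p(i)] = u[i] as in Definition 3.1."""
    v = [None] * len(u)
    for i, x in enumerate(u):
        v[p[i]] = x
    return tuple(v)

def compose(p, q):
    """Diagrammatic composition p ; q (first p, then q)."""
    return tuple(q[p[i]] for i in range(len(p)))

def tensor(p, q):
    """Parallel composition of Definition 3.2: q acts shifted past p."""
    n = len(p)
    return p + tuple(n + q[i] for i in range(len(q)))

def interchange(u, v):
    """The interchange permutation gamma(u, v): u⊗v -> v⊗u, swapping the two blocks."""
    n, m = len(u), len(v)
    return tuple(m + i for i in range(n)) + tuple(range(m))

u, v = ("a", "a", "b"), ("c",)
g = interchange(u, v)
print(apply_perm(g, u + v))  # ('c', 'a', 'a', 'b'): the block v moved in front of u
```

As a sanity check, gamma(u, v) ; gamma(v, u) is the identity permutation, which is the last symmetry axiom of Proposition 3.5 in miniature.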
Of course, it is possible to define a tensor product on Sym_S, together with interchange permutations, which make it a symmetric monoidal category (see also Figure 1, where γ is the permutation {1 -> 2, 2 -> 1}).

Definition 3.2 (Operations on Permutations)
Given the permutations p: u -> v and p′: u′ -> v′ in Sym_S, their parallel composition p ⊗ p′: u ⊗ u′ -> v ⊗ v′ is the permutation such that

  (p ⊗ p′)(i) = p(i)                   if 0 < i ≤ |u|,
  (p ⊗ p′)(i) = p′(i - |u|) + |u|      if |u| < i ≤ |u| + |u′|.

Given a permutation σ of m elements and strings u_i ∈ S⊗, i = 1, ..., m, the interchange permutation γ_σ(u_1, ..., u_m) is the permutation p such that

  p(i) = i - Σ_{j=1}^{h-1} |u_j| + Σ_{σ(j) < σ(h)} |u_j|   if Σ_{j=1}^{h-1} |u_j| < i ≤ Σ_{j=1}^{h} |u_j|.

Figure 1: The monoidal structure of Sym_S (diagram omitted).

It is easy to see that ⊗ extends to a functor ⊗: Sym_S × Sym_S -> Sym_S making Sym_S a strict monoidal category. Moreover, the family of interchange permutations γ = {γ(u, v)}_{u,v ∈ Sym_S} provides the symmetry isomorphism which makes Sym_S a symmetric strict monoidal category.

Theorem 3.3
Let S be a set, let C be a symmetric strict monoidal category, and let F be a function from S to the set of objects of C. Then there exists a unique symmetric strict monoidal functor F̂: Sym_S -> C extending F.

The preceding result proves that the mapping S -> Sym_S extends to a left adjoint functor from Set, the category of sets, to SSMC, the category of symmetric strict monoidal categories. Equivalently, Sym_S is the free symmetric strict monoidal category on the set S, which is the key point about Sym_S.

In the following, given a string u ∈ S⊗, let M(u) denote the multiset corresponding to u and, given a net N, let Sym_{S_N} denote the category of permutations on the set S_N.

Definition 3.4 (The category Q[N])
Let N be a net in Petri.
Then Q[N] is the category which includes Sym_{S_N} as a subcategory and has as additional arrows those defined by the following inference rules:

  if t: M(u) -> M(v) is in T_N, then t_{u,v}: u -> v is in Q[N];
  if α: u -> v and β: u′ -> v′ are in Q[N], then α ⊗ β: u ⊗ u′ -> v ⊗ v′ is in Q[N];
  if α: u -> v and β: v -> w are in Q[N], then α ; β: u -> w is in Q[N];

plus the axioms expressing the fact that Q[N] is a symmetric strict monoidal category with symmetry isomorphism γ, and the following axiom involving transitions and symmetries:

  p ; t_{u′,v′} = t_{u,v} ; q   where p: u -> u′ and q: v -> v′ in Sym_{S_N}.   (Φ)

Exploiting the freeness of Sym_{S_N}, it is easy to prove the following completely axiomatic description of Q[N], which can be useful in many contexts.

Proposition 3.5
Q[N] is (isomorphic to) the category C whose objects are the elements of S_N⊗ and whose arrows are generated by the inference rules

  if u ∈ S_N⊗, then id_u: u -> u is in C;
  if u, v ∈ S_N⊗, then c_{u,v}: u ⊗ v -> v ⊗ u is in C;
  if t: M(u) -> M(v) is in T_N, then t_{u,v}: u -> v is in C;
  if α: u -> v and β: u′ -> v′ are in C, then α ⊗ β: u ⊗ u′ -> v ⊗ v′ is in C;
  if α: u -> v and β: v -> w are in C, then α ; β: u -> w is in C;

modulo the axioms expressing that C is a strict monoidal category, namely

  α ; id_v = α = id_u ; α   and   (α ; β) ; δ = α ; (β ; δ),
  α ⊗ (β ⊗ δ) = (α ⊗ β) ⊗ δ   and   id_0 ⊗ α = α = α ⊗ id_0,
  id_u ⊗ id_v = id_{u⊗v}   and   (α ⊗ α′) ; (β ⊗ β′) = (α ; β) ⊗ (α′ ; β′),

the latter whenever the right-hand term is defined; the following axioms expressing that C is symmetric with symmetry isomorphism c:

  c_{u,v⊗w} = (c_{u,v} ⊗ id_w) ; (id_v ⊗ c_{u,w}),
  c_{u,u′} ; (β ⊗ α) = (α ⊗ β) ; c_{v,v′}   for α: u -> v, β: u′ -> v′,
  c_{u,v} ; c_{v,u} = id_{u⊗v};

and the following axiom, corresponding to axiom (Φ):

  p ; t_{u′,v′} ; q = t_{u,v}   where p: u -> u′ and q: v′ -> v are symmetries.

We show next that Q[-] can be lifted to a functor from the category of Petri nets to an appropriate category of symmetric strict monoidal categories and equivalence classes of symmetric strict monoidal functors.
The role of such an equivalence is to take into account that we look at the strings of S_N⊗ as concrete representatives of the multisets of S_N⊕ and, therefore, we want to consider as equal those functors which differ only by picking different, yet compatible, linearizations of multisets.

Definition 3.6 (Symmetric Petri Categories)
A symmetric Petri category is a symmetric strict monoidal category C in SSMC whose monoid of objects is the free monoid S⊗ for some set S.

For any pair C and D of symmetric Petri categories, consider the binary relation R_{C,D} on the symmetric strict monoidal functors from C to D defined as F R_{C,D} G if and only if there exists a monoidal natural isomorphism τ: F ≅ G whose components are all symmetries. Clearly, R_{C,D} is an equivalence relation, and the family R = {R_{C,D}} is a congruence with respect to functor composition. Therefore, the following definition makes sense.

Definition 3.7 (The category SSMC/R)
Let SSMC/R be the quotient of the full subcategory of SSMC consisting of the symmetric Petri categories, modulo the congruence R.

Theorem 3.8 (Q[-]: Petri -> SSMC/R)
Q[-] extends to a functor from Petri to SSMC/R.

Proof. (Sketch.) Let <f, g>: N0 -> N1 be a morphism of Petri nets. In order to define Q[<f, g>] we need to be able to embed the markings of N1 into Q[N1]. To this end, consider any function in_{N1}: S_N1⊕ -> S_N1⊗ such that M(in_{N1}(μ)) = μ, i.e., a choice of a linearization for each multiset. Since g is a monoid homomorphism from the free monoid S_N0⊕ to S_N1⊕, it corresponds to a unique function g′ from S_N0 to S_N1⊕, whence we obtain ĝ = in_{N1} ∘ g′: S_N0 -> S_N1⊗, i.e., a function from S_N0 to the set of objects of Q[N1]. Then, from Theorem 3.3, we have the symmetric strict monoidal functor F0: Sym_{S_N0} -> Q[N1]. Finally, we extend F0 to a functor Q[<f, g>] from Q[N0] to Q[N1] by considering the symmetric strict monoidal functor F which coincides with F0 on Sym_{S_N0} and maps t_{u,v}: u -> v to f(t)_{F(u),F(v)}: F(u) ->
F(v). Since monoidal functors map symmetries to symmetries, and since f(t) is a transition of N1, it follows immediately that F preserves axiom (Φ), i.e., that F is well defined. Moreover, since a different choice of in_{N1} would clearly give a functor G such that F R G, we have that Q[<f, g>] does not depend on in_{N1}. It is easy to check that this definition makes Q[-] into a functor. ∎

However, the category SSMC/R is still too general for our purpose. In particular, it is easily noticed that Q[-] is not full. This signifies that SSMC/R has too little structure to represent net behaviours precisely enough; equivalently, since the structure of the objects of a category C is 'encoded' in the morphisms of C, it signifies that the morphisms of SSMC/R do not capture the structure of symmetric Petri categories precisely enough. Specifically, the transitions, which are definitely primary components of nets and as such are treated by the morphisms in Petri, have no corresponding notion in SSMC/R: we need to identify such a notion and refine the choice of the category of net computations accordingly.

The key to accomplishing our task is the following observation about axiom (Φ) in Definition 3.4: as already mentioned, it simply expresses that the collection of the arrows t_{u,v} of Q[N], for t ∈ T_N and u, v ∈ S_N⊗, is a natural transformation. Namely, for C a symmetric Petri category with objects S⊗, and μ a multiset in S⊕, let Sym_{C,μ} be the subcategory of C consisting of those objects u ∈ S⊗ such that M(u) = μ, together with the symmetries between them, and let in_{C,μ} be the inclusion of Sym_{C,μ} in C. Then, for μ, μ′ ∈ S⊕, one obtains a pair of parallel functors p_{C,μ}, p_{C,μ′}: Sym_{C,μ} × Sym_{C,μ′} -> C by composing, respectively, the first projection with in_{C,μ} and the second projection with in_{C,μ′}.

(Diagram omitted: the two parallel functors p_{C,μ} and p_{C,μ′} from Sym_{C,μ} × Sym_{C,μ′} to C.)

It follows directly from the definitions that, when C is Q[N], axiom (Φ) states exactly that, for all t: μ ->
μ′ in T_N, the set {t_{u,v} | M(u) = μ, M(v) = μ′} is a natural transformation from p_{Q[N],μ} to p_{Q[N],μ′}.

A further very relevant property of the transitions of N, when considered as arrows of Q[N], is that of being decomposable as a tensor only trivially, and as a composition only by means of symmetries. This is easily captured by the following notion of primitive arrow.

Definition 3.9 (Primitive Arrows)
Let C be a symmetric Petri category. An arrow θ in C is primitive if
i) θ is not a symmetry;
ii) θ = α ; β implies that α is a symmetry and β is primitive, or vice versa;
iii) θ = α ⊗ β implies that α = id_0 and β is primitive, or vice versa.

A simple inspection of Definition 3.4 shows that the only primitive arrows in Q[N] are the arrows t_{u,v}, for t: M(u) -> M(v) a transition of N. As a consequence, the natural transformations τ: p_{Q[N],μ} -> p_{Q[N],μ′} whose components are primitive are in one-to-one correspondence with the transitions of N. Following the usual categorical paradigm, we then use the properties that characterize the transitions of N in Q[N], expressed in abstract categorical terms, to define a notion of transition in any symmetric Petri category.

Definition 3.10 (Transitions of Symmetric Petri Categories)
Let C be a symmetric Petri category and let S⊗ be its monoid of objects. A transition of C is a natural transformation τ: p_{C,μ} -> p_{C,μ′}, for μ, μ′ in S⊕, whose components τ_{u,v} are primitive arrows of C.

It is clear now what the extra structure required in SSMC/R is: transitions must be preserved by morphisms of symmetric Petri categories. Formally, for C and D in SSMC/R and F: C -> D in SSMC, F respects transitions if, for each transition τ: p_{C,μ} -> p_{C,μ′} of C, there exists a transition τ′ of D such that F(τ_{u,v}) = τ′_{F(u),F(v)} for all (u, v) in Sym_{C,μ} × Sym_{C,μ′}; in this case, we say that τ′ corresponds to τ via F.

The following lemma shows that a symmetric strict monoidal functor which preserves transitions defines a mapping between the sets of transitions and that, moreover, this property extends to the arrows of SSMC/R.
It follows immediately that Definition 3.12 is well given.

Lemma 3.11
If F: C -> D respects transitions, then for any transition τ of C there exists a unique transition τ′ of D which corresponds to τ via F.
If F R G, then F respects transitions if and only if G does so, and then τ′ corresponds to τ via F if and only if τ′ corresponds to τ via G.

Definition 3.12 (Symmetric Petri Morphisms and the Category TSSMC)
A morphism of symmetric Petri categories is an arrow of SSMC/R which respects transitions. We shall use TSSMC to denote the (lluf) subcategory of SSMC/R whose arrows are the morphisms of symmetric Petri categories.

Finally, it is easy to prove that Q[-] is actually a functor to TSSMC.

Proposition 3.13 (Q[-]: Petri -> TSSMC)
The functor Q[-] restricts to a functor from Petri to TSSMC.

Proof. It is enough to verify that, for any morphism <f, g>: N0 -> N1 in Petri, a representative F of Q[<f, g>] respects transitions. This follows at once, since f is a function T_N0 -> T_N1, F(t_{u,v}) = f(t)_{F(u),F(v)}, and the transitions of Q[Ni] are exactly the natural transformations {t_{u,v} | M(u) = μ, M(v) = μ′} for t: μ -> μ′ in T_Ni. ∎

Interestingly enough, we can identify a functor from TSSMC to Petri which is a coreflection right adjoint to Q[-]. It is worth remarking that this answers a possible legitimate doubt about the category TSSMC: in principle, in fact, the functoriality of Q[-] could be due to a very tight choice of the target category, e.g., the congruence R could induce too many isomorphisms of categories and Q[-] could make undesirable identifications of nets. The existence of a coreflection right adjoint to Q[-] is, of course, the best possible proof of the adequacy of TSSMC: it implies that Petri is embedded in it fully and faithfully as a coreflective subcategory. This result supports our claim that TSSMC is an axiomatization of the category of net computations.

Theorem 3.14 (Q[-] ⊣ N[-]: Petri ->
TSSMC)
Let C be a symmetric Petri category, and let S⊗ be its monoid of objects. Define N[C] to be the Petri net (∂0, ∂1: T -> S⊕), where
T is the set of transitions τ: p_{C,μ} -> p_{C,μ′} of C;
∂0(τ: p_{C,μ} -> p_{C,μ′}) = μ and ∂1(τ: p_{C,μ} -> p_{C,μ′}) = μ′.
Then N[-] extends to a functor TSSMC -> Petri which is right adjoint to Q[-]. In addition, since the unit is an isomorphism, the adjunction is a coreflection.

Proof. For any symmetric Petri category C, there is a (unique) symmetric strict monoidal functor ε_C: Q[N[C]] -> C which is the identity on the objects and which sends the component at (u, v) of the transition τ: μ -> μ′ of N[C] to the component τ_{u,v} of the natural transformation τ: p_{C,μ} -> p_{C,μ′}: Sym_{C,μ} × Sym_{C,μ′} -> C. Since it clearly preserves transitions, we have that ε_C is (a representative of) a morphism of symmetric Petri categories. It is not difficult to prove that ε_C enjoys the couniversal property making it the counit of the adjunction. The unit η_N: N -> N[Q[N]] is the morphism <f, id>, where f sends t ∈ T_N to {t_{u,v}} ∈ T_{N[Q[N]]}, which is an iso. ∎

We end this section by identifying the replete image of Q[-] in TSSMC, i.e., the full subcategory of TSSMC consisting of those symmetric Petri categories isomorphic to Q[N], for some N in Petri.

Theorem 3.15 (Petri ≃ PSSMC)
Let PSSMC be the full subcategory of TSSMC consisting of those symmetric Petri categories C whose arrows can be generated by tensor and composition from symmetries and components of transitions of C, uniquely up to the axioms of symmetric strict monoidal categories, i.e., the axioms in Proposition 3.5, and the naturality of transitions, i.e., axiom (Φ).
Then PSSMC and Petri are equivalent.

Proof. By general results in category theory, it is enough to show that C belongs to PSSMC if and only if ε_C: Q[N[C]] -> C is an isomorphism, which is easy.
∎

4 Strongly Concatenable Processes

In this section we introduce a slight refinement of concatenable processes and show that it is abstractly represented by the arrows of the category Q[N]. In other words, we find a process-like representation for the arrows of Q[N]. This yields a functorial construction for the category of the processes of a net N.

Definition 4.1 (Strongly Concatenable Processes)
Given a Petri net N in Petri, a strongly concatenable process of N is a tuple (π, ℓ, L), where π: Θ -> N is a process of N, and ℓ: min(Θ) -> {1, ..., |min(Θ)|} and L: max(Θ) -> {1, ..., |max(Θ)|} are isomorphisms, i.e., total orderings of, respectively, the minimal and the maximal places of Θ.

An isomorphism of strongly concatenable processes is an isomorphism of the underlying processes which, in addition, preserves the orderings ℓ and L. As usual, we identify isomorphic strongly concatenable processes.

As in the case of concatenable processes, it is easy to define an operation of concatenation of strongly concatenable processes. We associate a source and a target in S_N⊗ to each strongly concatenable process by taking the string corresponding to the linear ordering of, respectively, min(Θ) and max(Θ). Then, the concatenation of (π0: Θ0 -> N, ℓ0, L0): u -> v and (π1: Θ1 -> N, ℓ1, L1): v -> w is the strongly concatenable process u -> w obtained by merging the maximal places of Θ0 and the minimal places of Θ1 according to L0 and ℓ1. (See Figure 2, where we enrich the usual representation of non-sequential processes by labelling the minimal and the maximal places with the values of, respectively, ℓ and L.)

Proposition 4.2
Under the above defined operation of sequential composition, the strongly concatenable processes of N form a category CQ[N] whose identities are those processes consisting only of places, which therefore are both minimal and maximal, and such that ℓ = L.

Strongly concatenable processes admit a tensor product such that, given SCP = (π0: Θ0 -> N, ℓ0, L0): u -> v and SCP′ = (π1: Θ1 ->
N, ℓ1, L1): u′ -> v′, SCP ⊗ SCP′ is the strongly concatenable process (π: Θ -> N, ℓ, L): u ⊗ u′ -> v ⊗ v′ given below (see also Figure 2), where +, besides the usual sum of natural numbers, denotes also the disjoint union of sets and functions, and in0 and in1 the corresponding injections:

  Θ = (∂0_Θ0 + ∂0_Θ1, ∂1_Θ0 + ∂1_Θ1: T_Θ0 + T_Θ1 -> (S_Θ0 + S_Θ1)⊕);
  π = π0 + π1;
  ℓ(in0(a)) = ℓ0(a) and ℓ(in1(a)) = |min(Θ0)| + ℓ1(a);
  L(in0(a)) = L0(a) and L(in1(a)) = |max(Θ0)| + L1(a).

Observe that ⊗ is a functor ⊗: CQ[N] × CQ[N] -> CQ[N]. The strongly concatenable processes consisting only of places are the analogues in CQ[N] of the permutations of Q[N]. In particular, for any u, v ∈ S_N⊗, the strongly concatenable process γ(u, v), consisting of places in one-to-one correspondence with the elements of the string u ⊗ v, mapped by π to the corresponding places of N, and such that ℓ(u_i) = i, ℓ(v_i) = |u| + i, L(u_i) = |v| + i and L(v_i) = i, plays in CQ[N] the role played by the permutation γ(u, v) in Q[N] (see also Figure 3).

Figure 2: An example of the algebra of concatenable processes (diagram omitted).

Figure 3: A transition t_{u,v}: u -> v and the symmetry γ(u, v) in CQ[N] (diagram omitted).

Proposition 4.3
Under the above defined tensor product, CQ[N] is a symmetric strict monoidal category whose symmetry isomorphism is the family {γ(u, v)}_{u,v ∈ S_N⊗}.

The transitions t of N are faithfully represented, in the obvious way, by processes with a unique transition which is in the post-set of every minimal place and in the pre-set of every maximal place, minimal and maximal places being in one-to-one correspondence with, respectively, ∂0_N(t) and ∂1_N(t). Thus, varying ℓ and L on the process corresponding to a transition, we obtain a representative in CQ[N] of each instance t_{u,v} of t in Q[N] (see also Figure 3).

Theorem 4.4
CQ[N] and Q[N] are isomorphic.

Proof. (Sketch.)
Consider the following mapping F from the arrows of Q[N] to strongly concatenable processes.
An instance t_{u,v} of a transition t of Q[N] is mapped to the strongly concatenable process with a unique transition and two layers of places: the minimal, in one-to-one correspondence with ∂0_N(t) and ordered by ℓ so as to form the string u, and the maximal, in one-to-one correspondence with ∂1_N(t) and ordered so as to form v.
The permutation γ(u, v) is sent to the strongly concatenable process γ(u, v).
F is extended inductively to a generic term of Q[N], i.e., α ⊗ β is mapped to F(α) ⊗ F(β) and α ; β to F(α) ; F(β).
Then, defining F to be the identity on the objects gives the required isomorphism F: Q[N] ≅ CQ[N]. ∎

References
[1] E. Best and R. Devillers. Sequential and Concurrent Behaviour in Petri Net Theory. Theoretical Computer Science, n. 55, pp. 87-136, 1987.
[2] C. Brown, D. Gurr, and V. de Paiva. A Linear Specification Language for Petri Nets. Technical Report DAIMI PB-363, Computer Science Dept., Aarhus University, 1991.
[3] P. Degano, J. Meseguer, and U. Montanari. Axiomatizing Net Computations and Processes. In Proceedings of the 4th LICS Symposium, pp. 175-185, IEEE, 1989.
[4] U. Goltz and W. Reisig. The Non-Sequential Behaviour of Petri Nets. Information and Computation, n. 57, pp. 125-147, 1983.
[5] S. MacLane. Categories for the Working Mathematician. Springer-Verlag, 1971.
[6] J. Meseguer and U. Montanari. Petri Nets are Monoids. Information and Computation, n. 88, pp. 105-154, Academic Press, 1990.
[7] M. Nielsen, G. Plotkin, and G. Winskel. Petri Nets, Event Structures and Domains, Part 1. Theoretical Computer Science, n. 13, pp. 85-108, 1981.
[8] C.A. Petri. Kommunikation mit Automaten. PhD thesis, Institut für Instrumentelle Mathematik, Bonn, Germany, 1962.
[9] C.A. Petri. Non-Sequential Processes. Interner Bericht ISF-77-5, Gesellschaft für Mathematik und Datenverarbeitung, Bonn, Germany, 1977.
[10] W. Reisig. Petri Nets. Springer-Verlag, 1985.
[11] V. Sassone.
On the Semantics of Petri Nets: Processes, Unfoldings, and Infinite Computations. PhD Thesis TD 6/94, Dipartimento di Informatica, Università di Pisa, 1994.
[12] V. Sassone. Some Remarks on Concatenable Processes. Technical Report TR 6/94, Dipartimento di Informatica, Università di Pisa, 1994.
[13] G. Winskel. A New Definition of Morphism on Petri Nets. In Proceedings of STACS '84, LNCS, n. 166, pp. 140–150, Springer-Verlag, 1984.
[14] G. Winskel. Petri Nets, Algebras, Morphisms and Compositionality. Information and Computation, n. 72, pp. 197–238, 1987.

Functional Logic Programming in GCLA

Olof Torgersson
Department of Computing Science, Chalmers University of Technology
S-412 96 Göteborg, Sweden
[email protected]

Abstract. We describe a definitional approach to functional logic programming, based on the theory of Partial Inductive Definitions and the programming language GCLA. It is shown how functional and logic programming are easily integrated in GCLA using the features of the language; that is, combining functions and predicates in programs becomes a matter of programming methodology. We also give a brief description of a way to automatically generate efficient procedural parts for the described definitions.
