Page 17 out of 24 total pages


12 DE Domain

Authors: Lukito Muliadi
Edward A. Lee

12.1 Introduction

The discrete-event (DE) domain supports time-oriented models of systems such as queueing systems, communication networks, and digital hardware. In this domain, actors communicate by sending events, where an event is a data value (a token) and a time stamp. A DE scheduler ensures that events are processed chronologically according to this time stamp by firing those actors whose available input events are the oldest (having the earliest time stamp of all pending events).

A key strength of our implementation is that simultaneous events (those with identical time stamps) are handled systematically and deterministically. A second key strength is that the global event queue uses an efficient structure that minimizes the overhead of maintaining a sorted list containing a large number of events.

12.1.1 Model Time

In the DE model of computation, time is global, in the sense that all actors share the same global time. The current time of the model is often called the model time or simulation time to avoid confusion with current real time.

As in most Ptolemy II domains, actors communicate by sending tokens through ports. Ports can be input ports, output ports, or both. Tokens are sent by an output port and received by all input ports connected to the output port through relations. When a token is sent from an output port, it is packaged as an event and stored in a global event queue. By default, the time stamp of an output is the model time, although specialized DE actors can produce events with future time stamps.

Actors may also request that they be fired at some time in the future by calling the fireAt() method of the director. This places a pure event (one with a time stamp, but no data) on the event queue. A pure event can be thought of as setting an alarm clock to be awakened in the future. Sources (actors with no inputs) are thus able to be fired despite having no inputs to trigger a firing. Moreover, actors that introduce delay (outputs have larger time stamps than the inputs) can use this mechanism to schedule a firing in the future to produce an output.
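The fireAt() pattern described above can be illustrated with a plain-Java sketch of a periodic source. The class name ClockSourceSketch and the list standing in for the director are hypothetical, not the Ptolemy II API; only the scheduling pattern (each firing requests the next firing one period later) mirrors the mechanism described above.

```java
import java.util.*;

// Hypothetical sketch of the fireAt() pattern a source actor uses to
// drive itself: each firing schedules the next one a fixed period later.
// The director is simulated here by a list of requested firing times.
public class ClockSourceSketch {
    private final double period;
    private final List<Double> requests = new ArrayList<>(); // stands in for the director

    public ClockSourceSketch(double period, double startTime) {
        this.period = period;
        requests.add(startTime);      // initial pure event, as from initialize()
    }

    // Simulate processing the next pure event: "fire" at its time stamp
    // and request a refiring, as a real actor would via fireAt().
    public double processNext() {
        double now = requests.remove(0);
        requests.add(now + period);   // corresponds to fireAt(now + period)
        return now;
    }
}
```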

In the global event queue, events are sorted based on their time stamps and their destination ports (or actors, in the case of pure events). An event is removed from the global event queue when the model time reaches its time stamp, and if it has a data token, then that token is put into the destination input port.

At any point in the execution of a model, the events stored in the global event queue have time stamps greater than or equal to the model time. The DE director is responsible for advancing (i.e. incrementing) the model time when all events with time stamps equal to the current model time have been processed (i.e. the global event queue only contains events with time stamps strictly greater than the current time). The current time is advanced to the smallest time stamp of all events in the global event queue. This advancement marks the beginning of an iteration.
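The advancement rule above can be sketched in plain Java. The real director uses a calendar queue and richer event records; here a priority queue of bare time stamps (class and method names are illustrative, not the Ptolemy II API) is enough to show how an iteration advances model time and then drains all simultaneous events.

```java
import java.util.*;

// Illustrative sketch (not the Ptolemy II API): each iteration advances
// model time to the smallest pending time stamp, then removes every
// event whose time stamp equals the new current time.
public class TimeAdvanceSketch {
    private final PriorityQueue<Double> eventQueue = new PriorityQueue<>();
    private double currentTime = 0.0;

    public void enqueue(double timeStamp) {
        // Pending events always have time stamps >= the model time.
        if (timeStamp < currentTime)
            throw new IllegalArgumentException("time stamp in the past");
        eventQueue.add(timeStamp);
    }

    // Run one iteration; returns the time stamps processed (all equal).
    public List<Double> iterate() {
        List<Double> processed = new ArrayList<>();
        if (eventQueue.isEmpty()) return processed;
        currentTime = eventQueue.peek();            // advance model time
        while (!eventQueue.isEmpty() && eventQueue.peek() == currentTime) {
            processed.add(eventQueue.poll());       // simultaneous events
        }
        return processed;
    }

    public double getCurrentTime() { return currentTime; }
}
```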

12.1.2 Iteration

At each iteration, after advancing the current time, the DE director chooses events from the global event queue based on their time stamps and destination actors, according to these rules:

12.1.3 Getting a Model Started

Before one of the iterations described above can be run, there have to be initial events in the global event queue. Actors may produce initial pure events in their initialize() method. Normally, they cannot produce events with data in their initialize() method because type resolution has not been done, so the types of the ports are not set. Thus, to get a model started, at least one actor must be used that produces such pure events. All the domain-polymorphic timed sources described in the Actor Libraries chapter produce such events. We can define the start time to be the smallest time stamp of these initial events.

12.1.4 Stopping Execution

Execution stops when one of these conditions becomes true:

12.2 Overview of The Software Architecture

The UML static structure diagram for the DE kernel package is shown in figure 12.1. For model builders, the important classes are DEDirector, DEActor and DEIOPort. At the heart of DEDirector is a global event queue that sorts events according to their time stamps and priorities.

The DEDirector uses an efficient implementation of the global event queue, a calendar queue data structure [11]. The time complexity of this implementation is O(1) on average for both enqueue and dequeue operations, meaning that these operations are roughly independent of the number of pending events in the global event queue. For extensibility, different implementations of the global event queue can be realized by implementing the DEEventQueue interface and specifying the event queue using the appropriate constructor for DEDirector.
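A minimal calendar queue, loosely following the classic algorithm, can be sketched as follows. This is not the Ptolemy II implementation: bucket resizing (which the real queue uses to keep buckets short, and on which the average O(1) bound depends) is omitted, events are reduced to bare time stamps, and all names are hypothetical.

```java
import java.util.*;

// Simplified calendar queue sketch: a fixed array of sorted buckets
// indexed by time stamp modulo a "year" of nBuckets * width.  Enqueue
// and dequeue usually touch only a few buckets, which is the source of
// the near-O(1) behavior; bucket resizing is omitted for brevity.
public class CalendarQueueSketch {
    private final int nBuckets;
    private final double width;               // time span of one bucket
    private final List<List<Double>> buckets; // each bucket kept sorted
    private int lastBucket = 0;               // bucket of last dequeued event
    private double bucketTop;                 // upper time bound of that bucket's "day"
    private int size = 0;

    public CalendarQueueSketch(int nBuckets, double width) {
        this.nBuckets = nBuckets;
        this.width = width;
        this.bucketTop = width;
        this.buckets = new ArrayList<>();
        for (int i = 0; i < nBuckets; i++) buckets.add(new ArrayList<>());
    }

    public void enqueue(double timeStamp) {
        List<Double> b = buckets.get((int) (timeStamp / width) % nBuckets);
        int pos = Collections.binarySearch(b, timeStamp);
        if (pos < 0) pos = -pos - 1;          // insertion point keeps bucket sorted
        b.add(pos, timeStamp);
        size++;
    }

    public double dequeue() {
        if (size == 0) throw new NoSuchElementException("event queue is empty");
        // Scan forward from the last bucket, one "day" at a time.
        int i = lastBucket;
        double top = bucketTop;
        for (int n = 0; n < nBuckets; n++) {
            List<Double> b = buckets.get(i);
            if (!b.isEmpty() && b.get(0) < top) {
                lastBucket = i;
                bucketTop = top;
                size--;
                return b.remove(0);
            }
            i = (i + 1) % nBuckets;
            top += width;
        }
        // Nothing within the current year: fall back to a direct search.
        double min = Double.POSITIVE_INFINITY;
        int minBucket = -1;
        for (int j = 0; j < nBuckets; j++) {
            List<Double> b = buckets.get(j);
            if (!b.isEmpty() && b.get(0) < min) { min = b.get(0); minBucket = j; }
        }
        lastBucket = minBucket;
        bucketTop = (Math.floor(min / width) + 1.0) * width;
        size--;
        return buckets.get(minBucket).remove(0);
    }
}
```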

The DEActor class provides convenient methods to access time, since time is an essential part of a timed domain like DE. Nonetheless, actors in a DE model are not required to be derived from the DEActor class. Simply deriving from TypedAtomicActor gives you the same capability, but without the convenience. In the latter case, time is accessible through the director.

The DEIOPort class is used by actors that are specialized to the DE domain. It supports annotations that inform the scheduler about delays through the actor. It also provides two additional methods, overloaded versions of broadcast() and send(). The overloaded versions take a second argument for the time delay, allowing actors to send output data with a time delay (relative to current time).

Domain polymorphic actors, such as those described in the Actor Libraries chapter, have as ports instances of TypedIOPort, not DEIOPort, and therefore cannot produce events in the future directly by sending them through output ports. Note that tokens sent through TypedIOPort are treated as if they were sent through DEIOPort with the time delay argument equal to zero. Domain polymorphic actors can produce events in the future indirectly by using the fireAt() method of the director. By calling fireAt(), the actor requests a refiring in the future. The actor can then produce a delayed event during the refiring.

12.3 The DE Actor Library

The DE domain has a small library of actors in the ptolemy.domains.de.lib package, shown in figure 12.2. These actors are particularly characterized by implementing both the TimedActor and SequenceActor interfaces. These actors use the current model time, and in addition, assume they are dealing with sequences of discrete events. Some of them use domain-specific infrastructure, such as the convenience class DEActor and the base class DETransformer. The DETransformer class provides an input port and an output port that are instances of DEIOPort. The Delay and Server actors use facilities of these ports to influence the firing priorities.

12.4 Mutations

The DE director tolerates changes to the model during execution. The change should be queued with the director or manager using requestChange(). While invoking those changes, the method invalidateSchedule() is expected to be called, notifying the director that the topology it used to calculate the priorities of the actors is no longer valid. This will result in the priorities being recalculated the next time prefire() is invoked.

However, there is one subtlety. If an actor produces events in the future via DEIOPort, then the destination actor will be fired even if it has been removed from the topology by the time the execution reaches that future time. This may not always be the expected behavior. The Delay actor in the DE library behaves this way.

12.5 Writing DE Actors

It is very common in DE modeling to include custom-built actors. No pre-defined actor library seems to prove sufficient for all applications. For the most part, writing actors for the DE domain is no different than writing actors for any other domain. Some actors, however, need to exercise particular control over time stamps and actor priorities. Such actors use instances of DEIOPort rather than TypedIOPort. The first section below gives general guidelines for writing DE actors and domain-polymorphic actors that work in DE. The second section explains in detail the priorities, in preparation for the following section, which gives an example. The final section discusses actors that operate as a Java thread.

12.5.1 General Guidelines

The points to keep in mind are:


private int _count;

 
create a shadow variable

private int _countShadow;

 
Then write the methods as follows:

public void fire() {
    _countShadow = _count;
    ... perform some computation that may modify _countShadow ...
}

public boolean postfire() {
    _count = _countShadow;
    return super.postfire();
}
This ensures that the state is updated only in postfire().
In a similar fashion, delayed outputs (produced by either mechanism) should be produced only in the postfire() method, since delayed outputs are persistent state. Thus, fireAt() should be called only in postfire(), as should the overloaded send() and broadcast() methods of DEIOPort.

12.5.2 Simultaneous Events

An important aspect of a DE domain is the prioritizing of simultaneous events. This gives the domain a dataflow-like behavior for events with identical time stamps. It is done by assigning ranks to actors. The ranks are drawn from the set of non-negative integers. They are uniquely assigned; i.e. no two distinct actors are assigned the same rank. Simultaneous events with highest priority are those destined to actors with the lowest ranks. The ranks are determined by a topological sort of a directed acyclic graph (DAG) of the actors.

The DAG of actors follows the topology of the graph, except when there are declared delays. Consider the simple topology shown in figure 12.3. Assume that actor Y is a zero-delay actor, meaning that its output events have the same time stamp as the input events (this is suggested by the dashed arrow). Suppose that actor X produces an event with time stamp τ. That event is available at ports B and D, so the scheduler could choose to fire actors Y or Z. Which should it fire? Intuition tells us it should fire the upstream one first, Y, because that firing may produce another event with time stamp τ at port D (which is presumably a multiport). It seems logical that if actor Z is going to get one event on each input channel with the same time stamp, then it should see those events in the same firing. If there are simultaneous events at B and D, then the one at B should have higher priority.

Once the DAG is constructed, it is sorted topologically. This simply means that an ordering of actors is assigned such that an upstream actor in the DAG is earlier in the ordering than a downstream actor. This ordering is not unique, meaning that the priorities assigned to actors are somewhat arbitrary. As long as actors are communicating only via events, however, then these arbitrary choices will have no impact on the end result of executing the model. We say that the execution is deterministic.
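The rank assignment described above can be sketched as a topological sort in plain Java. The graph representation and class name below are illustrative, not the Ptolemy II kernel API; edges that an actor declares as delayed (e.g. via delayTo()) would simply be left out of the DAG before calling this.

```java
import java.util.*;

// Illustrative sketch of priority assignment by topological sort:
// actors are nodes of a DAG (delay edges already removed), and each
// actor receives a unique non-negative integer rank such that every
// upstream actor has a smaller rank than its downstream actors.
public class RankSketch {
    public static Map<String, Integer> assignRanks(Map<String, List<String>> dag) {
        // Count incoming edges for every actor.
        Map<String, Integer> inDegree = new HashMap<>();
        for (String a : dag.keySet()) inDegree.putIfAbsent(a, 0);
        for (List<String> succs : dag.values())
            for (String s : succs) inDegree.merge(s, 1, Integer::sum);

        Deque<String> ready = new ArrayDeque<>();
        for (var e : inDegree.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());

        Map<String, Integer> rank = new HashMap<>();
        int next = 0;
        while (!ready.isEmpty()) {
            String a = ready.poll();
            rank.put(a, next++);          // unique rank per actor
            for (String s : dag.getOrDefault(a, List.of()))
                if (inDegree.merge(s, -1, Integer::sum) == 0) ready.add(s);
        }
        // If some actor was never ranked, the graph has a cycle with no
        // declared delay: a zero-delay loop.
        if (rank.size() != inDegree.size())
            throw new IllegalStateException("zero-delay loop detected");
        return rank;
    }
}
```

Note that a cycle left in the graph makes the sort fail, which is the same condition under which the DE director refuses to run a model containing a zero-delay loop.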

There are situations where constructing a DAG following the topology is not possible. Consider the topology shown in figure 12.4. It is evident from the figure that the topology is not acyclic. Indeed, figure 12.4 depicts a directed loop with no declared delays, called a zero-delay loop, for which a topological sort cannot be done. The director will refuse to run the model and will terminate with an error message.

The Delay actor in DE is a domain-specific actor that asserts a delay relationship between its input and output. Thus, if we insert a Delay actor in the loop, as shown in figure 12.5, then constructing the DAG becomes once again possible. The Delay actor breaks the precedences.

Note in particular that the Delay actor breaks the precedences even if its delay parameter is set to zero. Thus, the DE domain is perfectly capable of modeling zero-delay loops, but the model builder has to specify the order in which events should be processed by placing a Delay actor with a zero value for its parameter.

12.5.3 Examples

Simplified Delay Actor

An example of a domain-specific actor for DE is shown in figure 12.6. This actor delays input events by some amount specified by a parameter. The domain-specific features of the actor are shown in bold. They are:


input.delayTo(output);

 
This statement declares to the director that this actor implements a delay from input to output. The actor uses this to break the precedences when constructing the DAG to find priorities.

Server Actor

The Server actor in the DE library (see figure 12.2) uses a rich set of behavioral properties of the DE domain. A server is a process that takes some amount of time to serve "customers." While it is serving a customer, other arriving customers have to wait. This actor can have a fixed service time (set via the parameter serviceTime) or a variable service time (provided via the input port newServiceTime). A typical use would be to supply random numbers to the newServiceTime port to generate random service times. These times can be provided at the same time as arriving customers to get an effect where each customer experiences a different, randomly selected service time.

The (compacted) code is shown in figure 12.7. This actor extends DETransformer, which has two public members, input and output, both instances of DEIOPort. The constructor makes use of the delayTo() method of these ports to indicate that the actor introduces delay between its inputs and its output.

The actor keeps track of the time at which it will next be free in the private variable _nextTimeFree. This is initialized to minus infinity to indicate that whenever the model begins executing, the server is free. The prefire() method determines whether the server is free by comparing this private variable against the current model time. If it is free, then this method returns true, indicating to the scheduler that it can proceed with firing the actor. If the server is not free, then the prefire() method checks to see whether there is a pending input, and if there is, requests a firing when the actor will become free. It then returns false, indicating to the scheduler that it does not wish to be fired at this time. Note that the prefire() method uses the methods getCurrentTime() and fireAt() of DEActor, which are simply convenient interfaces to methods of the same name in the director.

The fire() method is invoked only if the server is free. It first checks to see whether the newServiceTime port is connected to anything, and if it is, whether it has a token. If it does, the token is read and used to update the serviceTime parameter. No more than one token is read, even if there are more in the input port, in case one token is being provided per pending customer.

The fire() method then continues by reading an input token, if there is one, and updating _nextTimeFree. The input token that is read is stored temporarily in the private variable _currentInput. The postfire() method then produces this token on the output port, with an appropriate delay. This is done in the postfire() method rather than the fire() method in keeping with the policy in Ptolemy II that persistent state is not updated in the fire() method. Since the output is produced with a future time stamp, it is persistent state.

Note that if the actor will not consume input tokens that are available in the fire() method, it is essential that prefire() return false. Otherwise, the DE scheduler will keep firing the actor until all the inputs are consumed, which will never happen if the actor is not consuming them!

Like the SimpleDelay actor in figure 12.6, this one produces outputs with future time stamps, using the overloaded send() method of DEIOPort that takes a delay argument. There is a subtlety associated with this design. If the model mutates during execution, and the Server actor is deleted, it cannot retract events that it has already sent to the output. Those events will be seen by the destination actor, even if by that time neither the server nor the destination are in the topology! This could lead to some unexpected results, but hopefully, if the destination actor is no longer connected to anything, then it will not do much with the token.
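The busy/free bookkeeping at the core of the Server actor can be isolated in a few lines of plain Java. This sketch simulates the event mechanics rather than using DEIOPort and the director, so the class and method names are hypothetical; only the _nextTimeFree logic mirrors the actor described above.

```java
// Hypothetical sketch of the Server actor's timing logic (the real
// actor lives in ptolemy.domains.de.lib and uses DEIOPort; here the
// event mechanics are simulated so the busy/free logic stands alone).
public class ServerSketch {
    // Minus infinity means the server is free when execution begins.
    private double nextTimeFree = Double.NEGATIVE_INFINITY;
    private final double serviceTime;

    public ServerSketch(double serviceTime) {
        this.serviceTime = serviceTime;
    }

    // Mirrors prefire(): the server is free once model time reaches
    // nextTimeFree.
    public boolean isFree(double currentTime) {
        return currentTime >= nextTimeFree;
    }

    // Mirrors fire()/postfire(): accept a customer arriving now and
    // return the time stamp of the delayed output (departure) event.
    public double serve(double currentTime) {
        if (!isFree(currentTime))
            throw new IllegalStateException("server busy until " + nextTimeFree);
        nextTimeFree = currentTime + serviceTime;
        return nextTimeFree;
    }
}
```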

12.5.4 Thread Actors

In some cases, it is useful to describe an actor as a thread that waits for input tokens on its input ports. The thread suspends while waiting for input tokens and is resumed when some or all of its input ports have input tokens. While this description is functionally equivalent to the standard description explained above, it leverages the Java multi-threading infrastructure to save the state information.

Consider the code for the ABRecognizer actor shown in figure 12.8. The two code listings implement two actors with equivalent behavior. The left one implements it as a threaded actor, while the right one implements it as a standard actor. We will from now on refer to the left one as the threaded description and the right one as the standard description. In both descriptions, the actor has two input ports, inportA and inportB, and one output port, outport. The behavior is as follows.

Produce an output event at outport as soon as events at inportA and inportB occur in that particular order, and repeat this behavior.

Note that the standard description needs a state variable state, unlike the threaded description. In general, the threaded description encodes the state information in the position of the code, while the standard description encodes it explicitly using state variables. While it is true that the context-switching overhead associated with multi-threading reduces performance, we argue that the simplicity and clarity of writing actors in the threaded fashion is well worth the cost in some applications.
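The standard description's state machine can be sketched in plain Java, stripped of ports and the director. The class below is hypothetical (figure 12.8 shows the actual actor code); its single boolean plays the role of the explicit state variable discussed above.

```java
// Hypothetical plain-Java rendering of the "standard description" of
// the ABRecognizer behavior: emit an output as soon as an event at
// inportA is followed by an event at inportB, then start over.  The
// boolean below is the explicit state variable that the threaded
// description encodes implicitly in its code position.
public class ABRecognizerSketch {
    private boolean seenA = false;   // explicit state variable
    private int outputs = 0;

    // Feed one input event; port is 'A' (inportA) or 'B' (inportB).
    // Returns true if this event causes an output event at outport.
    public boolean event(char port) {
        if (port == 'A') {
            seenA = true;
        } else if (port == 'B' && seenA) {
            seenA = false;           // recognize and reset
            outputs++;
            return true;
        }
        return false;
    }

    public int outputCount() { return outputs; }
}
```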

The infrastructure for this feature is shown in figure 12.1. To write an actor in the threaded fashion, one simply derives from the DEThreadActor class and implements the run() method. In many cases, the content of the run() method is enclosed in an infinite while (true) loop, since many useful threaded actors do not terminate.

The waitForNewInputs() method is overloaded and has two flavors, one that takes no arguments and another that takes an IOPort array as argument. The first suspends the thread until there is at least one input token in at least one of the input ports, while the second suspends until there is at least one input token in any one of the specified input ports, ignoring all other tokens.

In the current implementation, both versions of waitForNewInputs() clear all input ports before the thread suspends. This guarantees that when the thread resumes, all tokens available are new, in the sense that they were not available before the waitForNewInputs() method call.

The implementation also guarantees that between calls to the waitForNewInputs() method, the rest of the DE model is suspended. This is equivalent to saying that the section of code between calls to the waitForNewInputs() method is a critical section. One immediate implication is that the result of the method calls that check the configuration of the model (e.g. hasToken() to check the receiver) will not be invalidated during execution in the critical section. It also means that this should not be viewed as a way to get parallel execution in DE. For that, consider the DDE domain.

It is important to note that the implementation serializes the execution of threads, meaning that at any given time there is only one thread running. When a threaded actor is running (i.e. executing inside its run() method), all other threaded actors and the director are suspended. It will keep running until a waitForNewInputs() statement is reached, where the flow of execution will be transferred back to the director. Note that the director thread executes all non-threaded actors. This serialization is needed because the DE domain has a notion of global time, which makes parallelism much more difficult to achieve.

The serialization is accomplished by the use of a monitor in the DEThreadActor class. Basically, the fire() method of the DEThreadActor class suspends the calling thread (i.e. the director thread) until the threaded actor suspends itself (by calling waitForNewInputs()). One key point of this implementation is that the threaded actors appear just like ordinary DE actors to the DE director. The DEThreadActor base class encapsulates the threaded execution and provides the regular interfaces to the DE director. Therefore the threaded description can be used whenever an ordinary actor can, which is everywhere.

The code shown in figure 12.9 implements the run method of a slightly more elaborate actor with the following behavior:

Emit an output O as soon as two inputs A and B have occurred. Reset this behavior each time the input R occurs.

Future work in this area may involve extending the infrastructure to support various concurrency constructs, such as preemption, parallel execution, etc. It might also be interesting to explore new concurrency semantics similar to the threaded DE, but without the "forced" serialization.

12.6 Composing DE with Other Domains

One of the major concepts in Ptolemy II is modeling heterogeneous systems through the use of hierarchical heterogeneity. Actors on the same level of hierarchy obey the same set of semantic rules. Inside some of these actors may be another domain with a different model of computation. This mechanism is supported through the use of opaque composite actors. An example is shown in figure 12.10. The outermost domain is DE and it contains seven actors, two of which are opaque composite actors. The opaque composite actors contain subsystems, which in this case are in the DE and CT domains.

12.6.1 DE inside Another Domain

The DE subsystem completes one iteration whenever the opaque composite actor is fired by the outer domain. One of the complications in mixing domains is in the synchronization of time. Denote the current time of the DE subsystem by t_inner and the current time of the outer domain by t_outer. An iteration of the DE subsystem is similar to an iteration of a top-level DE model, except that prior to the iteration tokens are transferred from the ports of the opaque composite actor into the ports of the contained DE subsystem, and after the end of the iteration, the director requests a refiring at the smallest time stamp remaining in the event queue of the DE subsystem.

The first of these is done in the transferInputs() method of the DE director. This method is extended from its default implementation in the Director class. The implementation in the DEDirector class advances the current time of the DE subsystem to the current time of the outer domain, then calls super.transferInputs(). This is done in order to correctly associate tokens seen at the input ports of the opaque composite actor, if any, with events at the current time of the outer domain, t_outer, and put these events into the global event queue. This mechanism is, in fact, how the DE subsystem synchronizes its current time, t_inner, with the current time of the outer domain, t_outer. (Recall that the DE director advances time by looking at the smallest time stamp in the event queue of the DE subsystem.) Specifically, before the advancement the current time of the DE subsystem t_inner is less than or equal to t_outer, and after the advancement t_inner is equal to t_outer.

Requesting a refiring is done in the postfire() method of the DE director by calling the fireAt() method of the executive director. Its purpose is to ensure that events in the DE subsystem are processed on time with respect to the current time of the outer domain, t_outer.

Note that if the DE subsystem is fired because the outer domain is processing a refiring request, then there may not be any tokens in the input ports of the opaque composite actor at the beginning of the DE subsystem iteration. In that case, no new events with time stamps equal to t_outer will be put into the global event queue. Interestingly, in this case, the time synchronization will still work because t_inner will be advanced to the smallest time stamp in the global event queue, which, in turn, has to be equal to t_outer because we always request a refiring according to that time stamp.

12.6.2 Another Domain inside DE

Because an opaque composite actor is opaque, as far as the DE director is concerned it behaves exactly like a domain polymorphic actor. Recall that domain polymorphic actors are treated as functions with zero delay in computation time. To produce events in the future, a domain polymorphic actor requests a refiring from the DE director and then produces the events when it is refired.






ptII at eecs berkeley edu Copyright © 1998-1999, The Regents of the University of California. All rights reserved.