
13.7 Demos

There are four demos in the CG domain, shown in figure 13-3; these are explained below.

pipeline This demo illustrates a technique for generating pipelined schedules with Ptolemy's parallel schedulers, even though those schedulers attempt to minimize makespan (the time to compute one iteration of the schedule) rather than maximize throughput (the rate at which iterations are completed in a long-running execution). To retime a graph, we simply add delays on all feedforward arcs (arcs that are not part of feedback loops); delays must not be added inside feedback loops, as that would change the semantics. The effect of the added delays is to cause the generation of a pipelined schedule. The delays marked "(conditional)" in the demo are parameterized: the delay value is zero if the universe parameter retime is set to NO, and 100 if it is set to YES. The delay in the feedback loop is always one. In either case, schedules are generated for a three-processor system with no communication costs. If this were a real-life example, the programmer would next attempt to reduce the "100" values to the minimum values that allow the retimed schedule to run; when there are parallel paths, additional constraints apply so that corresponding tokens arrive at the same star together. If the system functions correctly with zero initial values at the points where the retiming delays are added, the generated schedule can be used directly; otherwise, a preamble (a partial schedule) can be prepended to provide the initial values. A sketch of the feedforward-arc classification appears below.
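
Which arcs count as feedforward is the crux of the technique. What follows is a minimal sketch of that classification, not Ptolemy code; it assumes a hypothetical dataflow graph represented as a Python dict mapping each actor to its successors, and uses the fact that an arc (u, v) lies on a feedback loop exactly when v can reach u.

    # Sketch: classify the arcs of a dataflow graph and add retiming
    # delays on the feedforward ones. The graph below is hypothetical.

    def reachable(graph, start, goal):
        """Depth-first search: can we get from start to goal?"""
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == goal:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(graph.get(node, []))
        return False

    def feedforward_arcs(graph):
        """Arcs that are not part of any feedback loop."""
        return [(u, v) for u, succs in graph.items()
                for v in succs if not reachable(graph, v, u)]

    # Four actors with one feedback loop B -> C -> B.
    graph = {"A": ["B"], "B": ["C"], "C": ["B", "D"], "D": []}

    RETIME_DELAY = 100   # the demo's conditional delay value
    delays = {arc: RETIME_DELAY for arc in feedforward_arcs(graph)}
    print(delays)        # {('A', 'B'): 100, ('C', 'D'): 100}

The arcs inside the loop, (B, C) and (C, B), are left alone, preserving the semantics; only the two feedforward arcs receive retiming delays.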
schedTest This is a simple multiprocessor code generation demo. By changing the parameters of the RateChange star you can make the demo more interesting: a rate change forces multiple invocations of a star in each schedule iteration, and you can observe how the scheduler parallelizes those invocations (see the sketch below).
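
The multiple invocations come from the SDF balance equations: if one star produces p tokens per firing and the star it feeds consumes c per firing, their repetition counts in one iteration must satisfy reps_producer * p = reps_consumer * c. Here is a minimal sketch of that computation for a hypothetical two-star connection (the demo's actual rates are set by the RateChange star's parameters):

    # Smallest integer repetition counts that balance a two-star SDF
    # connection: the producer emits p tokens per firing, the consumer
    # takes c per firing.

    from math import gcd

    def repetitions(p, c):
        g = gcd(p, c)
        return c // g, p // g   # (producer firings, consumer firings)

    # e.g. a producer of 2 tokens feeding a consumer of 3:
    print(repetitions(2, 3))    # (3, 2)

With rates 2 and 3, the producer fires three times and the consumer twice per iteration; it is these repeated firings that the parallel scheduler can spread across processors.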
Sih-4-1 This demo lets you investigate the properties of the parallel scheduler by providing a universe in which the run times of stars, the number of processors, and the communication cost between processors can all be varied. The problem, as posed by the default parameters, is to schedule a collection of dataflow actors onto three processors connected by a shared bus. Executing the demo brings up a Gantt chart display showing the partitioning of the actors onto the three processors. Clicking the left mouse button at various points in the schedule highlights the associated stars in the universe palette. After you exit the Gantt chart display, code is written to a separate file for each processor (here the "code" is simply a sequence of comments written by the dummy CG stars). It is interesting to explore the effects of varying the communication costs, the number of processors, and the communication topology. To do so, execute the edit-target command (type 'T'). A display of possible targets comes up. Of the available options, only SharedBus and FullyConnected use the parallel scheduler, so select one of them and click "Ok". Next, a display of target parameters appears. The interesting ones to vary are nprocs, the number of processors, and sendTime, the communication cost. Try using two or four processors, for example. Sometimes the scheduler will not use all of the processors: if you make the communication cost very large, everything will be placed on one processor, and if the communication cost is 1 (the default) and four processors are provided, only three will be used. A toy scheduler that reproduces this behavior is sketched below.
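
To see why a large communication cost drives everything onto one processor, consider a toy list scheduler in the same spirit (an illustrative sketch, not the scheduler Ptolemy uses): each task is placed on the processor on which it would finish earliest, paying the communication cost whenever a predecessor ran elsewhere.

    # Toy scheduler: tasks arrive in topological order; each is placed
    # on the processor that minimizes its finish time. A task may not
    # start until its predecessors' results arrive, and a predecessor
    # on another processor adds send_time to the arrival.

    def schedule(tasks, deps, runtime, nprocs, send_time):
        free = [0.0] * nprocs          # when each processor is next idle
        placed, finish = {}, {}
        for t in tasks:
            best = None
            for p in range(nprocs):
                ready = max([free[p]] +
                            [finish[d] + (0 if placed[d] == p else send_time)
                             for d in deps.get(t, [])])
                end = ready + runtime[t]
                if best is None or end < best[0]:
                    best = (end, p)
            finish[t], placed[t] = best
            free[best[1]] = best[0]
        return placed

    tasks = ["a", "b", "c", "d"]
    deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
    runtime = {t: 1.0 for t in tasks}

    print(schedule(tasks, deps, runtime, nprocs=3, send_time=0.1))
    # {'a': 0, 'b': 0, 'c': 1, 'd': 1}  two of the three processors used
    print(schedule(tasks, deps, runtime, nprocs=3, send_time=10.0))
    # {'a': 0, 'b': 0, 'c': 0, 'd': 0}  everything on one processor

Even this toy version reproduces the behavior described above: with cheap communication it still leaves a processor idle, and with expensive communication it serializes everything onto one processor.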
useless This is a simple demo of the dummy stars provided in the CG domain. Each star, when executed, adds code to the target. After execution completes (two iterations), the accumulated code is displayed in a popup window, showing the sequence of code produced by the three stars; the sketch below illustrates the pattern.
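
The pattern these stars illustrate can be sketched in a few lines, using hypothetical names rather than Ptolemy's actual C++ interface: each star contributes a code fragment every time it fires, and the target accumulates the fragments and displays them when execution finishes.

    # Sketch of code-generation stars: firing a star appends a code
    # fragment to the target; after the run, the target holds the
    # generated "program" in firing order.

    class DummyStar:
        def __init__(self, name):
            self.name = name
        def fire(self, target):
            target.append(f"/* code from {self.name} */")

    stars = [DummyStar(n) for n in ("star1", "star2", "star3")]
    target = []                  # the accumulated generated code
    for _ in range(2):           # two iterations, as in the demo
        for s in stars:
            s.fire(target)
    print("\n".join(target))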



Copyright © 1990-1997, University of California. All rights reserved.