1995 Annual RASSP Report

September 15, 1994 through September 15, 1995

SYSTEM-LEVEL DESIGN METHODOLOGY FOR EMBEDDED SIGNAL PROCESSORS

Contract Number: F33615-93-C-1317
Principal Investigator: Edward A. Lee
Organization: University of California at Berkeley


Contents:

  1. Overview
  2. Details of Accomplishments
  3. Events
  4. Next Period Activities
  5. Acknowledgments
  6. Papers and Patents
  7. Presentations


Project Participants at Berkeley:


1. Overview

The focus of this project is on design methodology for complex real-time systems where a variety of design methodologies and implementation technologies must be combined. Design methodologies are encapsulated in one or more models of computation, while implementation technologies are implemented as synthesis tools. Applications that use more than one model of computation and/or more than one synthesis tool are said to be heterogeneous. Hardware/software codesign is one example of heterogeneous design.

The project aims to develop formal models for such heterogeneous systems, a software environment for the design of such systems, and synthesis technologies for implementation of such systems. In the latter category, we are concentrating on problems not already addressed well elsewhere, such as the synthesis of embedded software and the partitioning and scheduling of heterogeneous parallel systems.

We have made progress on several different models of computation and their interaction. In dataflow modeling, we have formalized relationships among actors using partial orders and characterized the various types of dataflow in terms of their mathematical properties. We have also developed a new efficient robust scheduler for dynamic dataflow and investigated efficient dataflow representations for multidimensional multirate systems. The Process Networks model is a superset of the dataflow models and schedules blocks as processes (threads) under the control of a multitasking kernel. In other areas of modeling, we have researched formal models for characterizing hierarchical finite state machine (FSM) controllers. For interaction between models of computation, we have studied the mixing of FSM controllers with dataflow models, FSM controllers with discrete-event models, and dataflow models with discrete-event models. Another model of computation, called the Integrated Processing and Understanding of Signals and developed by the Boston University RASSP Project, is a knowledge-based architecture for controlling dataflow models, e.g., for signal reprocessing. We are also beginning to study how to integrate models of computation for analog circuits with discrete models of computation.

In implementation, we have made fundamental contributions in optimized synthesis of embedded software on both uniprocessor and multiprocessor architectures. For uniprocessor code generation, we can trade off program size, data size, and throughput. For parallel code generation, we have made progress in optimizing synchronized communication between processors. We have also made contributions in hierarchical scheduling and incremental compilation and developed methodologies for converting (untimed) models of computation into (timed) clocked circuits implemented in VHDL.

In hardware/software codesign, we have derived an efficient algorithm to partition dataflow graphs into a combined hardware/software implementation. We have also made a breakthrough in design methodology management by defining an abstraction for capturing the dependencies between the data, design representations, and tools used in a complex design process. Having a formal design model allows heterogeneous design styles to cooperate in a constructive and predictable manner.

We have successfully tested many theoretical ideas in modeling, implementation, system design, and user interfaces in the Ptolemy software environment. We have distributed two new releases of the software, which runs on ten different Unix architectures. These releases contain many improvements to the Ptolemy interactive graphical interface (pigi), including highly interactive, animated simulation capabilities. Pigi now supports a visual syntax for representing recursion and scalable systems, based on higher-order functions. During simulation, Ptolemy can now cooperate with MATLAB. In synthesis, Ptolemy can get feedback about hardware implementation cost from the high-level synthesis tools Hyper for Silage specifications and Synopsys tools for VHDL specifications.

Since the last release of Ptolemy, we have continued to develop its abilities. During simulation, Ptolemy can cooperate with Esterel. New computational models (domains) for simulation have been developed for Process Networks and Integrated Processing and Understanding of Signals. In hardware/software synthesis and system design, we have developed new targets, a new VHDL domain, and a design methodology management domain. One of the new targets generates C code and compiles it for distributed simulation on a Network of Workstations. The new VHDL domain, which will replace the behavioral and functional VHDL domains, generates a variety of styles of VHDL code. For the VHDL and other code generation domains, we have devised a mechanism for systematically generalizing the Ptolemy wormhole mechanism for passing data and control between any of the supported hardware and software implementation technologies. The Design Methodology Management Domain captures dependencies between tools, data, and design representations in a design process. In user interfaces, we have begun the development of an object-oriented graphical user interface called Tycho that supports the integration of special-purpose design editors, just as Ptolemy supports the integration of special-purpose design tools for simulation and implementation.

We have increased the visibility of the Ptolemy Project and Ptolemy software environment. We have made a wealth of information available on our World Wide Web server http://ptolemy.eecs.berkeley.edu, and on our FTP site ptolemy.eecs.berkeley.edu, including on-line demonstrations and searchable hypertext versions of all four volumes of documentation for the software environment. We initiated a new Usenet news group called comp.soft-sys.ptolemy. We gave a Ptolemy Mini-conference at U.C. Berkeley in March and a Ptolemy Tutorial in Washington, D.C., in July. We have placed the transparencies from both all-day events on our World Wide Web and FTP sites.


2. Details of Accomplishments

Dataflow Modeling

A dynamic dataflow scheduler should satisfy two requirements: (R1) it should implement a complete execution of the dataflow graph, executing forever without halting whenever such an execution exists; and (R2) it should execute the graph in bounded memory whenever a bounded-memory execution exists. The latter is particularly important for embedded systems.

In general, given a dataflow graph, it is undecidable whether the graph will deadlock (the halting problem). It is also undecidable whether the graph can be executed in bounded memory (Joe Buck showed in his PhD thesis how to convert this problem to the halting problem). It is easy to define a scheduling algorithm that satisfies R1 or R2, but no scheduling algorithm can always, in finite time, guarantee both R1 and R2. This problem has plagued us for years, and has also appeared in much of the dataflow architecture work.

In addition, the notion of an iteration in dataflow and process networks domains has risen to the fore as a critical (and difficult) theoretical issue. An unambiguous definition of an iteration is necessary for control of a simulation, but even more importantly, for interaction between heterogeneous models of computation. The so-called "synchronous" methods, for example (like statecharts and Esterel), cannot be mixed (in a determinate way) with dataflow without an unambiguous definition of an iteration. An iteration is easy to define for the synchronous dataflow (SDF) model of computation, but for dynamic dataflow and process network models, the equivalent definition fails in some cases. In particular, an iteration in SDF is a sequence of firings that returns the buffers in a dataflow graph to their original state. It is undecidable whether such an iteration exists in a dynamic dataflow or process network model. Thus, our third condition is: (R3) the scheduler should provide a well-defined default notion of a minimal execution step, from which iterations can be defined, even when no firing sequence returns the buffers to their original state.

We have defined a robust and simple scheduler that can be used in the DDF and Process Network domains in Ptolemy, as well as in commercial simulators like COSSAP (from Synopsys), DDSIM (from Mentor Graphics), and SPW (from Cadence). It provably satisfies all three conditions.
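
The flavor of such a scheduler can be sketched as follows. This is an illustrative reconstruction, not the Ptolemy implementation, and the actor interface is hypothetical: execute data-driven under buffer bounds, and when no actor can fire but at least one is blocked writing to a full buffer (an "artificial" deadlock), grow the smallest such buffer and continue.

```python
from collections import deque

def run_bounded(actors, buffers, capacities, steps):
    """Fire actors data-driven under buffer capacity bounds.  On an
    artificial deadlock (progress blocked only by full buffers), grow
    the smallest full buffer.  Returns the final capacities."""
    for _ in range(steps):
        fired = False
        blocked_full = []
        for fire in actors:
            status = fire(buffers, capacities)  # 'fired', 'starved', or ('full', buf)
            if status == 'fired':
                fired = True
            elif isinstance(status, tuple):
                blocked_full.append(status[1])
        if not fired:
            if not blocked_full:
                break                            # true deadlock: halt
            b = min(blocked_full, key=lambda name: capacities[name])
            capacities[b] += 1                   # artificial deadlock: grow bound
    return capacities

# A producer that writes 2 tokens per firing and a consumer that reads 1:
def producer(bufs, caps):
    if len(bufs['ab']) + 2 > caps['ab']:
        return ('full', 'ab')
    bufs['ab'].extend([0, 0])
    return 'fired'

def consumer(bufs, caps):
    if not bufs['ab']:
        return 'starved'
    bufs['ab'].popleft()
    return 'fired'
```

Starting with a capacity of 1, the first pass deadlocks artificially (the producer needs room for 2 tokens), the bound grows to 2, and the system then runs forever within that bound, satisfying both R1 and R2.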

The notion of a step as defined by the scheduler is not always the notion that the user wants to see. We define an "iteration" to be one or more steps, where the number of steps is controlled by the user. To permit a user to annotate a dataflow graph with the number of firings of a block that constitute an "iteration," we implemented an extension to the GUI and the Target object to support "pragmas" attached to blocks. A given Target (such as the DDFTarget) understands only certain pragmas. In the DDF domain, the DDFTarget understands a pragma called "firingsPerIteration". Thus, when a user specifies a value of this pragma for a particular block, an "iteration" has been defined. If no such value is specified, then an "iteration" equals a "step," the scheduler default.

In [8], we review the beginning of a mathematical theory of dataflow based on partial orders, and connect this theory to the functional languages and dataflow architectures communities. This work has opened a number of issues that we are now investigating in more depth. A central idea is that a dataflow process consists of repeated applications of dataflow firings, and that this can be described by the higher-order function F = map(f), where f is a function describing a single actor firing. The "map" higher-order function applies f repeatedly to an input stream. This notation formalizes a number of concepts that have not been clear (at least not to us). We have determined, for example, that if "f" is "sequential" (in a very technical sense), then "F" is sequential. Sequentiality implies determinacy of a network of such functions. The next broader class of functions that we know of beyond the sequential functions, called "stable functions," also implies determinacy. However, we have found a counterexample where F is not stable even though f is. For this counterexample, F is not determinate. Thus, we believe that sequential functions characterize, in a very fundamental sense, those functions whose composition abstracts to a determinate function. The class of sequential functions, as it happens, is exactly the class implemented by the Ptolemy Dynamic Dataflow (DDF) domain. We have also implemented a superset of dataflow models in a Process Networks domain.
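
The idea can be illustrated with a small sketch (the names here are ours, not from [8]): a firing function f that consumes a fixed number of tokens per firing is lifted by "map" into a process F over entire streams.

```python
def lift(f, consume):
    """F = map(f): repeatedly apply the firing function f, which
    consumes `consume` tokens per firing, to an entire input stream."""
    def F(stream):
        out = []
        for i in range(0, len(stream) - consume + 1, consume):
            out.extend(f(stream[i:i + consume]))
        return out
    return F

# A single firing that sums two tokens into one output token:
add2 = lift(lambda xs: [xs[0] + xs[1]], consume=2)
```

Applied to the stream [1, 2, 3, 4, 5], add2 produces [3, 7], leaving the trailing token unconsumed until more data arrives.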

Based on the theory of dataflow process networks [8], Tom Parks has implemented a Process Networks domain, using the gthreads package, a POSIX thread implementation from Florida State University that is distributed under the GNU Library General Public License. This gives us a head start on threaded computation under a standard that is expected to be supported by several workstation vendors, including Sun. Process networks are a generalization of dynamic dataflow.

In another dataflow advance, Praveen Murthy has generalized the multidimensional dataflow model of computation to support interpolation and decimation of images on non-rectangular lattices. One practical application is conversion to and from 2:1 interlaced video signals.

Embedded Software Synthesis

Shuvra Bhattacharyya (now at the Semiconductor Research Laboratory of Hitachi America) and Praveen Murthy have developed a new technique for jointly minimizing the program and data memory space for embedded DSP applications that are specified by dataflow graphs. The technique gives top priority to minimizing program space, accomplishing this by constructing a so-called single-appearance schedule, in which each actor in the dataflow graph has exactly one lexical appearance in the schedule. This allows in-line code (which is maximally fast) to also be maximally compact.

Two heuristic methods have been developed for constructing single-appearance schedules for acyclic dataflow graphs: pairwise grouping of adjacent nodes (PGAN) and recursive partitioning based on minimum cuts (RPMC).

These two methods appear to be complementary, in that for graphs in which one fails to find a good schedule, the other finds one. The PGAN method, which was previously developed by Shuvra, has been refined so that for multirate dataflow graphs without feedback loops it is much more efficient than before. Moreover, it has been shown to be optimal under certain commonly valid assumptions.

After constructing a single-appearance schedule, a dynamic programming algorithm is applied as a post-optimization step to re-parenthesize the schedule to minimize the data memory usage. This post-optimization is optimal for graphs without delays. For other graphs, the technique is guaranteed to compute a schedule whose data memory cost is no greater than that of an optimally re-parenthesized schedule; in some cases, the technique is able to modify the lexical ordering of the given single appearance schedule, and parenthesize the modified ordering in such a way that the data memory cost of the result is less than that of any parenthesization of the original lexical ordering.
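
For chain-structured graphs without delays, the re-parenthesization step can be sketched as a matrix-chain-style dynamic program. This is a simplification of the algorithm in [18]; in particular, the cost model below, where the buffer crossing a split holds the tokens exchanged per invocation of the enclosing subschedule, is our simplifying assumption.

```python
from functools import lru_cache, reduce
from math import gcd

def min_buffer(q, T):
    """q[i]: repetition count of actor i in a chain-structured SDF graph;
    T[k]: total tokens exchanged per schedule period on the edge between
    actors k and k+1.  Returns the minimum total buffer memory over all
    parenthesizations of the (fixed) lexical ordering."""
    q, T = tuple(q), tuple(T)
    n = len(q)

    @lru_cache(None)
    def g(i, j):                       # gcd of repetitions of subchain i..j
        return reduce(gcd, q[i:j + 1])

    @lru_cache(None)
    def c(i, j):                       # min buffer memory for subchain i..j
        if i == j:
            return 0
        # Splitting at k: the left loop finishes before the right starts,
        # so edge k must hold its tokens for one subschedule invocation.
        return min(c(i, k) + c(k + 1, j) + T[k] // g(i, j)
                   for k in range(i, j))

    return c(0, n - 1)
```

For example, min_buffer([6, 4, 3], [12, 12]) is 18: splitting after the second actor nests the first edge inside a loop of factor gcd(6, 4) = 2, halving its buffer requirement from 12 to 6.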

The PGAN and RPMC heuristics were run on a set of dataflow graphs representing practical multirate signal processing algorithms and compared against randomly generated (valid) schedules for these graphs. They were also run on randomly generated graphs, and compared against randomly generated schedules for these graphs. The results argue very strongly for the heuristics, with PGAN performing better for graphs with more regularly structured rate changes, and RPMC performing better for graphs with less regularly structured rate changes. Two reports [18,20] and a short summary [5] exist for this work.

Shuvra and Praveen have also extended this method to handle delays and arbitrary topologies, and show that while the original algorithm always preserves the lexical ordering of the input schedule, the generalization may change the lexical ordering (for a graph that has delays) [33]. Thus, it may compute a schedule that has lower buffer memory requirement than all single appearance schedules that have the same lexical ordering as the input schedule. Experimental results are given in [33].

In addition, an adaptation of the dynamic programming algorithm is given that optimally solves the converse problem of minimizing code size by optimally organizing loops for an arbitrary (not necessarily single appearance) schedule. Thus, for example, a schedule that has been constructed for minimum buffer memory can be post-optimized to improve the code size, while preserving the buffer optimality.

System-Level Design

Asawaree Kalavade has developed a design methodology management (DMM) domain in Ptolemy that captures the design flow as an integral part of a design. As an illustrative example, she has constructed a demonstration of a design flow that consists of a parallelizing code generator that iterates the design, increasing the number of processors until a specified throughput constraint is met. Once the iteration has converged, parallel code is synthesized, as is a description of the parallel architecture. The parallel code can then be simulated executing on the synthesized architecture. The components in the design include a module for estimating the required number of processors, a parallel scheduling module, a parallel code generator, an architecture generator, and a hardware simulation module. The entire design flow is represented graphically.

Asawaree Kalavade has also developed a sophisticated hardware/software partitioning algorithm. This algorithm supports selection from among multiple implementations within the hardware or software categories. The area of a node implemented in hardware depends on the time allocated to run it. In our earlier partitioning work, we assumed that hardware executed in the critical time (i.e., the best case, corresponding to the largest area) and made a binary choice for each node, choosing either hardware or software. The extension instead selects the appropriate implementation for a node, given its area-time curve, rather than just deciding whether it is in hardware or software. Asawaree has developed an algorithm that uses the earlier partitioning algorithm as its core. She has run some experiments using it, with impressive results. The algorithm has complexity O(n^3), where n is the number of nodes. For an eight-node example, finding the optimal solution using integer linear programming required 3.5 hours; Asawaree's algorithm came close to this optimal solution and completed in 3 minutes. This work is reported in [6].
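
The selection problem itself can be illustrated with a toy sketch. This is our own simplification under a serial-schedule assumption, not Asawaree's algorithm: given each node's area-time curve and a deadline, pick one implementation point per node so that total time meets the deadline with minimum total area.

```python
def select_implementations(curves, deadline):
    """curves[i]: list of (area, time) implementation points for node i,
    assuming nodes execute serially.  Returns the minimum total area
    over selections whose total time meets the deadline, or None if no
    selection meets it."""
    best = {0: 0}                              # elapsed time -> min area so far
    for options in curves:
        nxt = {}
        for t, a in best.items():
            for area, time in options:
                t2 = t + time
                if t2 <= deadline and a + area < nxt.get(t2, float('inf')):
                    nxt[t2] = a + area
        best = nxt
    return min(best.values()) if best else None
```

With curves [[(4, 1), (1, 3)], [(5, 1), (2, 2)]], a deadline of 4 gives total area 6, while relaxing it to 5 gives area 3, because both nodes can then take their slow, small implementation points.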

Heterogeneous Simulation

Prof. Soonhoi Ha of Seoul National University helped us to make a set of changes to improve the synchronous dataflow and discrete-event interaction. These changes allow us to build arbitrarily deeply nested mixed systems while maintaining a consistent and intuitive notion of global time. This is challenging because the synchronous dataflow (SDF) domain has no notion of time. The model we are following is that the dataflow domains appear to any timed domain to fire "instantaneously". That is, they produce outputs with the same time stamps as the inputs. If they are multirate systems, then they may optionally also produce additional events with time stamps in the future, under the control of a target parameter. The changes that were required included modifications to the DE schedulers to prevent them from advancing their notion of time beyond their requested stopping time. In addition, the SDF wormhole object had to explicitly handle time stamps in order to define its multirate behavior. We have built a number of demonstration systems that illustrate this interaction.
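
The "instantaneous firing" convention can be sketched as follows (a hypothetical wrapper, not the Ptolemy wormhole code): the untimed subsystem's outputs are stamped with the time of the inputs that triggered the firing.

```python
def instantaneous(fire):
    """Wrap an untimed firing function so that, seen from a timed (DE)
    domain, it appears to fire instantaneously: outputs carry the time
    stamp of the triggering inputs."""
    def timed_fire(events):                    # events: list of (time, value)
        t = max(time for time, _ in events)    # time of the latest input
        return [(t, y) for y in fire([v for _, v in events])]
    return timed_fire
```

An adder wrapped this way maps the simultaneous events [(5.0, 1), (5.0, 2)] to [(5.0, 3)]; multirate behavior, where additional output events carry future time stamps, requires the extra timestamp handling described above.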

Mike Williamson has re-implemented the VHDL code generation domain in Ptolemy. The domain works differently from the domains in the latest release. It is more like the style of code generation used in the CGC domain. The idea is to determine the style of VHDL that is generated in the Target, rather than in the libraries. Thus, the same libraries and the same block diagrams can be used to generate a plurality of VHDL styles. The default target generates code for non-homogeneous SDF systems as sequential statements in a single VHDL process. An alternative target keeps track of firings and data dependencies and then constructs VHDL entities for each firing and connects them together according to their data dependencies (including states). Both use the same blocks, but produce very different styles of code. We are working on building a block library for these new versions of the domains, which have a different syntax and use macros extensively.

Mike has also experimented with passing results from the structural code generation target (the alternative target) to Synopsys for basic synthesis. This does not, at this time, take advantage of any transformations that are possible because of the dataflow model, such as re-timing, pipelining, scheduling and allocation (in the sense of re-use of execution units). Rather, the software will currently synthesize a straight data path with no internal clocking or feedback loops, just inputs, structure, and outputs.

Heterogeneous Implementation

José Luis Pino has made progress on hierarchical scheduling, in which a dataflow graph is clustered, and schedulers are separately invoked on the clusters. Specialized schedulers can be combined with more general-purpose schedulers for improved overall performance.

José has demonstrated this hierarchical scheduling on a heterogeneous platform consisting of a Sun workstation running Solaris 2.4 and a programmable DSP on an S-bus card. His demonstrations incrementally compile real-time subsystems for the DSP and embed them within a non-real-time process running on the Unix workstation. Communication between them is asynchronous, using what José calls a "peek/poke" mechanism to asynchronously read and write into the DSP memory. His demonstration systems are acoustic modems (modems that transmit from an audio loudspeaker to an audio microphone through air). Animated, interactive signal displays are produced on the workstation, enabling better evaluation and understanding of the algorithms and their performance.
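
The flavor of the mechanism can be sketched as follows (an illustrative threaded model, not José's DSP implementation): the workstation asynchronously "pokes" and "peeks" locations in the DSP's memory, so the real-time side never blocks waiting for the non-real-time host.

```python
import threading

class DSPMemory:
    """Toy model of the "peek/poke" mechanism: the host reads and writes
    DSP memory locations asynchronously; neither side ever blocks
    waiting for the other."""
    def __init__(self, size):
        self._mem = [0] * size
        self._lock = threading.Lock()          # guards single-word access
    def poke(self, addr, value):               # asynchronous write
        with self._lock:
            self._mem[addr] = value
    def peek(self, addr):                      # asynchronous read
        with self._lock:
            return self._mem[addr]
```

Note the contrast with dataflow FIFO semantics: a peek samples whatever value is currently in memory rather than consuming a queued token, which is what makes the coupling asynchronous.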

The hierarchical scheduling mechanism permits the use of highly optimized loop scheduling techniques developed in our group by Shuvra Bhattacharyya. Without hierarchical scheduling, it was not possible to use these because they had not been designed for use in parallel systems. Because the applications are multirate, without hierarchical scheduling they required considerably more memory than was available on the DSP card. Moreover, scheduling time was substantial (because a rather large precedence graph was constructed). Thus, José has demonstrated that hierarchical scheduling enables modular use of scheduling optimizations, and has shown that in practical examples, considerable savings in embedded system memory are achieved. The publications [9,34] detail the hierarchical scheduling mechanism.

José has also proposed and prototyped an elegant and simple architecture for compiling subsystems in code generation domains and invoking them within simulation domains. There are a number of potential applications for this underlying infrastructure [38].

A fundamental problem is that dataflow systems cannot always be incrementally compiled. The technical description of the problem is that they lack the composition property. Collections of dataflow actors in a domain do not necessarily have the same semantics as an individual actor. This problem is shared by many modern languages, including all synchronous languages, such as Esterel, Statecharts, Signal, and SPW. We are studying the fundamentals of this problem, and plan to proceed with an experimental setup that will allow us to evaluate the severity of the problem in practical circumstances.

Advances in Modeling

Higher-order functions allow compact scalable construction of large applications in an intuitively appealing and parameterized way. They also typically expose a great deal more parallelism than alternative methods, and allow the use of recursion without incurring run-time overhead. They are now transparently usable in all Ptolemy domains.

Joel King has begun work on a mixed-signal design environment within Ptolemy, which aims to combine Spice-level circuit modeling with all other levels of design embraced by Ptolemy.

We also continue to gain understanding of the use of control-oriented semantics mixed with dataflow. Our view is that while dataflow is well-suited to representing signal processing algorithms, there are much better techniques available for control. The most sophisticated of these fall into the class of "synchronous languages," which includes statecharts and their many variants, Esterel, and several other formalisms. Events in these so-called "synchronous languages" are totally ordered, in contrast to dataflow where they are partially ordered. We have focused on languages with the semantics of hierarchical state machines, like Esterel and statecharts, because these appear to be most natural for control and most different from dataflow.

Part of our current effort is the evaluation of existing formalisms. Statecharts have the advantage of being readily amenable to a visual syntax, as implemented for example in the iLogix Statemate system. But considerable disadvantages have been exposed in the research community, where more than 22 versions (each a different language) have been implemented. Each of these versions patches perceived disadvantages in the original semantics. Esterel (a control language with hierarchical finite-state machine semantics) by contrast appears to be on more solid footing, but has no visual syntax and has been perceived by some as difficult to learn. We are evaluating whether the introduction of a visual syntax will help.

We continue to make progress on a Ptolemy prototype that uses Esterel for controller design. Working with Frederick Boulanger of Supelec in France, we have incorporated a new back end to the Esterel compiler called occ++ that generates C++ object definitions rather than C procedures to implement the Esterel controller. This matches the requirements in Ptolemy much better, allowing for an arbitrary number of instances of each Esterel module, and allowing for natural parameterization of these modules. In the current prototype, Esterel modules are implemented only in Ptolemy stars (in the dataflow and discrete-event domains). VHDL is another possible back end for Esterel.

We have identified three orthogonal semantic properties of Statecharts: FSM, hierarchy, and concurrency. If we take away from Statecharts the transitions that cross hierarchy boundaries, we get a simpler model in which the FSM semantics can be cleanly separated from the concurrency semantics. This means that the basic FSM model can be mixed with the various Ptolemy domains' concurrency models to get many models that are only slightly weaker than Statecharts. They lack the hierarchy-crossing transitions, but those are considered by many to violate the information hiding principle of hierarchical design. We are refining this observation and testing it out on some examples.
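
The decomposition can be sketched as follows (our illustration, not a Statecharts implementation): a basic FSM whose states may be refined by sub-FSMs, with no transition crossing a hierarchy boundary, so the FSM semantics stays separable from whatever concurrency model surrounds it.

```python
class FSM:
    """A basic FSM whose states may refine into sub-FSMs.  Transitions
    never cross hierarchy boundaries: the outer machine and each
    refinement react to events independently."""
    def __init__(self, start, transitions, refinements=None):
        self.state = start
        self.transitions = transitions         # (state, event) -> next state
        self.refinements = refinements or {}   # state -> sub-FSM
    def step(self, event):
        sub = self.refinements.get(self.state)
        if sub is not None:
            sub.step(event)                    # the refinement reacts first
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state
```

Because the sub-FSM is reached only through its enclosing state, each machine can be composed with a different concurrency model (dataflow, discrete-event, etc.) without entangling the two.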

The basic mechanism we use to nest controllers at all levels of a hierarchical design is a generalization of the wormhole mechanism in Ptolemy. Towards this end, we have constructed as a demonstration an SDF block that takes as parameters the names of two galaxies. It creates wormholes for these two galaxies and invokes them at runtime, arbitrarily switching between them. This is just a starting point for a block that could be used as a basis for Esterel, Tcl, or state machine controllers that invoke dataflow subsystems depending on whatever conditions they wish to depend on. This mechanism can be viewed both as a generalization of our current higher-order functions mechanism and as a generalization of our Wormhole concept. The implications are fairly profound: it means that one can write controllers in Ptolemy in arbitrary languages (C, C++, Esterel, Tcl, finite-state machines, etc.) that control the invocation of a plurality of subsystems. This promises to complete the work with mixing control systems into Ptolemy, since the control systems will not be restricted to being leaf cells in the hierarchy, as they are with the current Esterel implementation. They will be able to sit higher in the hierarchy, controlling entire subsystems written with foreign semantics. The current demonstration is very preliminary, but shows conclusively that this approach is feasible.
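
The behavior of the demonstration block is roughly the following (hypothetical names; in Ptolemy the subsystems are wormholes created from the named galaxies): a higher-order block parameterized by two subsystems, with a controller deciding which one fires on each invocation.

```python
def make_switcher(galaxy_a, galaxy_b, choose):
    """A higher-order block parameterized by two subsystems ("galaxies");
    on each invocation the controller `choose` decides which one fires."""
    def fire(inputs):
        subsystem = galaxy_a if choose(inputs) else galaxy_b
        return subsystem(inputs)
    return fire

# Example: a controller that smooths large inputs and passes small ones through.
smooth = lambda xs: [sum(xs) / len(xs)]
bypass = lambda xs: xs
block = make_switcher(smooth, bypass, lambda xs: max(xs) > 10)
```

The controller here is a one-line predicate, but it could equally be generated from Esterel, Tcl, or a state machine, which is the point of the generalization.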

Wan-Teh Chang has put together a preliminary Tk-based tool for entering and editing state transition diagrams for designing control programs graphically. This rounds out our infrastructure for experimenting with controllers in Ptolemy, adding a useful alternative to Esterel that is closer to the statecharts approach.

A number of fundamental issues emerge when embedding controllers (sequential decision makers) into dataflow and discrete-event systems. These issues cut to the heart of the semantics of discrete-event systems in general. The problem fundamentally boils down to one of mixing subsystems with partially-ordered events with other subsystems that assume totally-ordered events. The issues are much more broadly applicable than we realized, applying even to the general mixing of our more general dataflow models (BDF and DDF) with our discrete-event model (DE). The explanation below is too curt to be comprehensive, but will hopefully give the flavor of the issues.

Demonstration systems have been constructed where modules written in Esterel are embedded within both discrete event (DE) and synchronous dataflow (SDF) systems. We have observed that while the use of SDF in this context may be adequate for hardware design, it has serious inefficiencies for embedded software design. Moreover, the problems are fundamental to the embedding of any technique where events are totally ordered (Esterel, statecharts, finite automata, etc.) within dataflow graphs, where events are only partially ordered.

The nature of the problem is as follows: to preserve determinacy, the dataflow model does not permit actors to test their input ports for the presence of a token, nor to take a branch depending on whether a token is present. However, a controller often wants to monitor a signal, say an exception signal, and branch in response to that signal. In the SDF embedding, that signal must always be present, using for example a Boolean FALSE to indicate that an exception has not occurred, and a Boolean TRUE to indicate that an exception has occurred. For circuit design, where this signal may represent a voltage on a wire, there is no inefficiency implied here. For software, however, the production and consumption of a large number of FALSE tokens that indicate that nothing interesting is happening can be quite costly.

We have outlined a solution, and are continuing to work on the semantics. The solution has three parts. The first two parts result in very efficient implementations, but cannot always be applied. The third part is more expensive to implement, but will be needed only for more complicated systems. It is completely general, supporting in principle all known concurrent models of computation.

Parallel Implementations

Shuvra Bhattacharyya (now at the Semiconductor Research Laboratory of Hitachi America) and S. Sriram have developed a systematic methodology for reducing the overhead of synchronization (handshaking or semaphore checks) in parallel implementations derived from dataflow graphs. Three methods are used. The first and simplest is to remove redundant synchronizations. These are operations like semaphore checks that will always yield the same outcome at runtime, and hence need not be performed. The second method, called "re-synchronization," selectively adds synchronization operations that will then cause other synchronization operations to become redundant. Shuvra and Sriram have proven that the re-synchronization problem is NP-hard, but have established a correspondence with the well-studied set-covering problem, which provides a wealth of heuristic solutions. The third method converts a feedforward dataflow graph into a strongly connected graph in such a way as to reduce synchronization overhead without slowing down the execution. All three methods can be applied as post processing optimizations to the output of any static parallel scheduling algorithm. The results are reported in full [19] and in a conference paper [3].
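
The first method can be sketched as a reachability check (a simplification that ignores dataflow delays, which the full method must account for): a synchronization edge is redundant when some other path of synchronizations already enforces the same ordering.

```python
from collections import defaultdict

def redundant_syncs(sync_edges):
    """Return the synchronization edges (u, v) whose ordering is already
    enforced by a path of other synchronization edges (delays ignored)."""
    adj = defaultdict(list)
    for u, v in sync_edges:
        adj[u].append(v)

    def reachable(u, v, skip):
        # Depth-first search from u to v that never uses the edge `skip`.
        stack, seen = [u], set()
        while stack:
            x = stack.pop()
            if x == v:
                return True
            if x in seen:
                continue
            seen.add(x)
            stack.extend(y for y in adj[x] if (x, y) != skip)
        return False

    return [e for e in sync_edges if reachable(e[0], e[1], skip=e)]
```

For the chain A→B→C with an extra edge A→C, only A→C is reported redundant: its ordering is already implied, so the corresponding semaphore check can be removed.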

Patrick Warner has implemented a target in the CGC (code generation in C) domain that produces code for a NOW (Network of Workstations) cluster. The generated code is built on top of the active message abstraction, and hence is portable and potentially quite efficient. Patrick has shown that the same set of parallel executables can be run on an ordinary cluster of networked workstations as well as on the specially configured NOW. Surprisingly, initial tests resulted in faster runs on the ordinary cluster, but further tuning has now achieved better performance on NOW. Currently, in the Berkeley NOW cluster, active messages are implemented on top of TCP/IP, so there is considerable communication overhead. However, as that facility matures, and this overhead is removed, we will be able to track it and improve performance. Patrick has identified steps that are required in order to get truly efficient parallel execution, and we envision this becoming a major part of his research. More details about this can be found via Patrick's home page, http://ptolemy.eecs.berkeley.edu/.

Syntax Management

We have begun development of a software architecture that will support modular syntax in much the way the Ptolemy kernel supports modular semantics. While the C++ classes in the Ptolemy kernel equip a software laboratory for experimenting with models of computation, this new layer will equip a software laboratory for experimenting with visual syntaxes and sophisticated design visualization techniques.

A prototype, called Tycho, is being written in [incr tcl], an object-oriented extension to the Tcl language that we currently use in Ptolemy for scripting. This extension has much the flavor of C++, but is an interpreted language compatible with Tk, the X-Window toolkit associated with Tcl. [Incr tcl] was developed at AT&T and is freely distributed, using a license agreement similar to that of Tcl/Tk and Ptolemy.

Tycho is an object-oriented front-end for the Ptolemy system, named after Tycho Brahe, the 16th-century Danish astronomer. The key objectives of Tycho are:

The following specific tasks have been accomplished so far:
We have also made a number of improvements to the standard Ptolemy Interactive Graphical Interface (pigi), where we have continued to integrate more interactive capabilities by means of Tcl/Tk scripts and widgets. We have created interactive versions of the logic analyzer, Gantt chart display, and image display. We have added TkButtons, TkShowEvents, and TkShowBooleans blocks for generating asynchronous impulsive events, displaying events, and displaying boolean values, respectively. The interactive plotting blocks now support zooming in a standard way. We have added a strip-chart widget that tracks the entire history of a signal in the discrete-event domain. The run control panel supports Tcl-scripted runs of universes. The text entry panels are written in Tk and support a subset of the Emacs key bindings.

Pigi can now represent scalable designs by means of the higher-order functions explained above. The run control panel can now report the execution time of a simulation. When new icons for blocks are created, visible labels are now generated on the input and output ports if the block has more than one input or output.

We have upgraded pigi to use version 7.5 of the Tcl scripting language and version 4.1 of the Tk window toolkit, and integrated version 2.0 of iTcl, an object-oriented version of Tcl. Alan Kamas has rewritten the Tk event loop, which responds to mouse and keyboard commands, to be based on a timer. The new timer-based event loop dramatically speeds up Ptolemy simulations and standalone programs generated by Ptolemy. The compile-SDF target has been fixed to generate C++ code for universes with Tcl/Tk blocks in them.

Signal Reprocessing

We have been collaborating with the Boston University/MIT RASSP team on signal reprocessing in Ptolemy. In signal reprocessing, the parameters of a signal processing operation are adjusted based on its output, and the same data are processed again to obtain a "better" result. Adaptive filtering is a simple example. A more complicated example is estimating two sinusoids of unknown spacing: one approach is to apply the FFT and adjust the FFT length until the sinusoids are resolved (separated).
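The sinusoid example can be made concrete with a small sketch. The code below is a hypothetical illustration (function names are ours, and it uses NumPy rather than Ptolemy): it repeatedly reprocesses the data with a doubled FFT length until two spectral peaks appear.

```python
# Hypothetical illustration of signal reprocessing: reprocess the data
# with a growing FFT length until two closely spaced sinusoids appear
# as distinct peaks in the magnitude spectrum.
import numpy as np

def count_peaks(spectrum):
    """Count local maxima above half the global maximum."""
    thresh = spectrum.max() / 2
    return sum(
        1 for i in range(1, len(spectrum) - 1)
        if spectrum[i] > thresh
        and spectrum[i] >= spectrum[i - 1]
        and spectrum[i] > spectrum[i + 1]
    )

def resolve_sinusoids(x, n0=64):
    """Double the FFT length until two peaks resolve; return that length."""
    n = n0
    while n <= len(x):
        spectrum = np.abs(np.fft.rfft(x[:n]))  # analyze first n samples
        if count_peaks(spectrum) >= 2:
            return n
        n *= 2
    return None
```

A controller galaxy in Ptolemy would play the role of the `while` loop here, resetting the FFT-length parameter of the inner galaxy between firings.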

There are a number of ways to provide a general framework for reprocessing signals using the heterogeneity supported in Ptolemy. In Ptolemy, we can define an outer reprocessing system (galaxy) that decides how to change the processing parameters in the inner dataflow subsystems (galaxies). Before firing the inner dataflow galaxies, the reprocessing galaxy resets their parameters, thereby acting as a controller of the inner galaxies. In the current release of Ptolemy, we could define the outer-level controller using (1) the dynamic dataflow domain, or (2) the synchronous dataflow domain with a higher-order function mechanism that recompiles inner galaxies before invoking them. Two new computational models are being developed and investigated to serve as outer controller systems: (1) a finite-state machine domain, at U.C. Berkeley, and (2) an integrated processing and understanding of signals domain, at Boston University and U.C. Berkeley.

At the 1994 RASSP Conference, Joseph Winograd and Hamid Nawab from Boston University demonstrated a standalone radar clutter analysis testbed using the Integrated Processing and Understanding of Signals (IPUS) architecture to process radar data using expert knowledge encapsulated in software. Over the last year, with the help of Wan-Teh Chang, Brian Evans, Edward Lee, and others at U.C. Berkeley, they have integrated the IPUS architecture into the Ptolemy environment as an IPUS domain. The IPUS domain has a dynamic scheduler that reacts to events (knowledge) registered in global data structures (e.g., blackboards) by local actors (e.g., knowledge sources). The IPUS domain reasons about knowledge at different levels of abstraction arranged in a hierarchy. Various local actors (e.g., knowledge sources) have been developed that can be reused in any IPUS application. The Ptolemy and BU teams plan a demonstration of the new IPUS domain running the radar clutter analysis testbed at the 1995 RASSP Conference.

Applications

Bilung Lee has designed a vector quantization library, which has been released in Ptolemy 0.5.2. He has also added three new image processing blocks (Dither, EdgeDetect, and Contrast) to the SDF image palettes, together with demos that illustrate their use.

Mei Xiao has made considerable progress on integrating an image processing library written by John Gauch of the University of Kansas into Ptolemy's SDF domain. This library uses a particular image format that is a superset of many popular formats and can represent gray-scale and color images in two and three dimensions. To support this, Mei has created a new image class, derived from the Message class. Mei is writing a set of blocks to make use of the library. Matt Tavis has also created a Tcl/Tk script to display, zoom, and save images that can be used in any domain.

A multiprocessor video signal processing system based on the Video Signal Processor has been donated by Philips Labs in Eindhoven. The system has arrived, and Alan Kamas, Christopher Hylands, and Mei Xiao have set it up.

We have added several new blocks for digital communications. For example, we now have a Scrambler and DeScrambler that can implement any maximum-shift-register polynomial up to order 31, and thus can implement all existing CCITT scrambler standards. These can also be used to generate pseudo-random bit sequences or to make a correlated bit sequence appear to be white. Farhad Jalilvand has developed and documented several communications demonstrations, including Binary Frequency Shift Keying, Binary Phase Shift Keying, and Spread Spectrum. For the demonstrations, he has developed BFSK, BPSK, and Spread Spectrum transmitter and receiver subsystems.
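To illustrate the principle (not the Ptolemy blocks' actual implementation), the following is a hypothetical Python sketch of a self-synchronizing multiplicative scrambler and descrambler parameterized by the tap positions of the generator polynomial; taps (14, 17), for example, correspond to the CCITT V.22 polynomial 1 + x^-14 + x^-17.

```python
# Hypothetical sketch of a self-synchronizing (multiplicative) scrambler
# and descrambler. `taps` lists the delay-line positions of the generator
# polynomial, e.g. (14, 17) for 1 + x^-14 + x^-17.

def scramble(bits, taps):
    state = [0] * max(taps)
    out = []
    for b in bits:
        y = b
        for t in taps:
            y ^= state[t - 1]
        out.append(y)
        state = [y] + state[:-1]   # feed the scrambled bit back
    return out

def descramble(bits, taps):
    state = [0] * max(taps)
    out = []
    for b in bits:
        y = b
        for t in taps:
            y ^= state[t - 1]
        out.append(y)
        state = [b] + state[:-1]   # feed the received (scrambled) bit in
    return out
```

Because the descrambler rebuilds its shift-register state from the received bits, it recovers the input exactly when both sides start from the same state, and it resynchronizes automatically once a channel error has flushed out of the register.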

Much of the design work in computing system parameters in multidimensional multirate systems can be simplified with a combination of computational geometry and integer matrix algebra. In multiple dimensions, rate-changing operations are defined by a change in sampling grids. Sampling grids can be represented as a set of basis vectors, which can be considered as the column vectors that make up a sampling matrix. Mapping one sampling matrix onto another is a linear mapping represented by a rational matrix, called a resampling matrix. We have shown how to design two-dimensional rate changing systems (upsampler, filter, and downsampler in cascade) based on a geometric sketch of the passband to retain. From the sketched region, we use computational geometric techniques [7] to find the minimal enclosing parallelogram, which we use to compute the resampling matrix to perform the sampling conversion. Then, we factor the resampling matrix into the upsampling and downsampling matrices for the rate changer [25]. The procedure will find the best compression rate based on a parallelogram-shaped passband. The only other admissible geometry is a hexagonal-shaped passband, which will always do at least as well as the parallelogram-shaped passband. Generalizing this approach to multiple channels will enable the graphical design of two-dimensional filterbanks and wavelets. We have already shown how to apply integer matrix algebra to simplify the design of multidimensional filter banks and wavelets [26].
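A small sketch conveys the matrix algebra involved. The helper names below are hypothetical and the code handles only the 2x2 case: given input and output sampling matrices V_in and V_out (columns are grid basis vectors), the resampling matrix is R = V_in^-1 V_out, computed exactly with rational arithmetic; |det R| gives the overall factor by which the area per sample (and hence the compression rate) changes.

```python
# Hypothetical sketch: compute the rational resampling matrix relating
# two 2-D sampling grids, using exact arithmetic via fractions.
from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = Fraction(det2(m))
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

def matmul(a, b):
    return [[sum(Fraction(a[i][k]) * b[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

def resampling_matrix(v_in, v_out):
    """Rational R such that v_out = v_in * R (2x2 sampling matrices)."""
    return matmul(inv2(v_in), v_out)
```

For a rectangular grid coarsened by 2 in one dimension and 3 in the other, R is diag(2, 3) and |det R| = 6, i.e., a 6:1 compression. Factoring a general rational R into integer upsampling and downsampling matrices is the harder step addressed in [25] and [26], and is not attempted here.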

3. Events

Ptolemy Mini-conference

On March 10, 1995, we held a full-day Ptolemy Mini-conference for about 50 sponsors and friends of the Ptolemy project. The main purpose of the mini-conference was to strengthen our industrial ties. The list of talks and the transparencies for the talks are given in the Presentations section below. The organizations represented among the attendees included:

RASSP Conference and Ptolemy Tutorial

In conjunction with Dave Wilson of Berkeley Design Technology, Mike Williamson, Brian Evans, and Edward Lee led a full-day tutorial on Ptolemy at the RASSP conference in Arlington, Virginia. Approximately 25 people attended. The handouts for the tutorial are available in compressed Postscript form on the Ptolemy home page, at
http://ptolemy.eecs.berkeley.edu.

In addition, we staffed a booth on the exhibit floor and gave numerous demos. Ptolemy also played a role in five other exhibits:

  1. Sanders wrote their own front-end to Ptolemy that allows a user to sketch a target parallel architecture and quickly map an SDF graph to the processors in the sketched architecture.
  2. Sanders wrote a new code generation domain for FPGAs that uses the DE domain to automatically insert registers to compensate for pipelining. They apply perl scripts to the resulting ptcl code to generate the FPGA layout in a Xilinx format.
  3. DQDT derived a new VHDL domain to serve as a front end specification and VHDL code generation environment for behavior modeling and synthesis of Application-Specific Integrated Circuits. They applied the same approach using Mentor Graphics DSP Station as a front end.
  4. Berkeley Design Technology wrote a layer on top of the Ptolemy kernel called Ptolemy HSIM (Heterogeneous Simulation) which serves as a simulation backplane that allows Cadence's Signal Processing Workstation, Cadence's Bones and Precedence's SimMatrix tools to cooperate during a simulation. SimMatrix is a synchronization mechanism for connecting 30 different VHDL and Verilog simulators together.
  5. Boston University demonstrated a signal-reprocessing environment called IPUS that has been integrated into Ptolemy. Ptolemy serves as the organizing framework, and will eventually provide (through its other domains) a computational engine. Signal reprocessing is a family of knowledge-based techniques for iteratively refining a computation by dynamically selecting the algorithms to be applied to the data on the basis of results provided by previous algorithms.

Software Releases

The Ptolemy 0.5.1 and 0.5.2 releases, which consist of approximately 2000 files containing 300,000 lines and 8 Mb of source code, were distributed in September of 1994 and May of 1995, respectively. The Ptolemy software environment now runs on the following architectures:

See our World Wide Web server http://ptolemy.eecs.berkeley.edu or finger ptolemy at eecs berkeley edu for complete information. Christopher Hylands has converted three of the four volumes of Ptolemy documentation to HTML format. HTML and Postscript versions of the documentation, together with updated summary sheets, answers to frequently asked questions, a quick tour, and a tutorial, have been posted to our World Wide Web and FTP sites. We have set up a Usenet newsgroup called comp.soft-sys.ptolemy. Postings to our mailing list ptolemy-hackers at ptolemy eecs berkeley edu are cross-posted to comp.soft-sys.ptolemy and vice versa. Postings to the newsgroup and the mailing list are searchable from our World Wide Web site.

Outside Contributions

We have interacted with many researchers not directly involved in the Ptolemy Project. Within the University of California at Berkeley, we collaborate with the Infopad Project, which uses Ptolemy as a primary tool for simulating the operation of their hand-held portable computers. We also interact with the Network of Workstations group on distributed simulation and Prof. David Messerschmitt's group on the design and implementation of communications systems.

The list of contributions from outside U.C. Berkeley is quite long. Some users have tested alpha and beta releases of the software, and some have developed their own Ptolemy domains. Many users have helped us with porting Ptolemy to Linux, NetBSD, IBM R/6000, and DEC Alpha workstations. In terms of organizations, our primary interactions this year have included


4. Next Period Activities

The top priority topics that we will be addressing over the next year are:

5. Acknowledgments

We would like to give special thanks to the following people outside our group who contributed in significant ways to the results reported above:

6. Papers and Patents

The following patent was accepted:

The following papers were published during the reporting period:
  1. B. L. Evans, Douglas R. Firth, Kennard D. White, and E. A. Lee, "Automatic Generation of Programs That Jointly Optimize Characteristics of Analog Filter Designs," Proc. of European Conference on Circuit Theory and Design, August 27-31, 1995, Istanbul, Turkey, pp. 1047-1050.
  2. B. L. Evans, S. X. Gu, A. Kalavade, and E. A. Lee, "Symbolic Computation in System Simulation and Design," Invited Paper, Proc. of SPIE Int. Sym. on Advanced Signal Processing Algorithms, Architectures, and Implementations, July 9-16, 1995, San Diego, CA, pp. 396-407.
  3. S. S. Bhattacharyya, S. Sriram, and E. A. Lee, "Minimizing Synchronization Overhead in Statically Scheduled Multiprocessor Systems," Proc. of Int. Conference on Application Specific Array Processors, July 24-26, 1995.
  4. W.-T. Chang, A. Kalavade, and E. A. Lee, "Effective Heterogeneous Design and Cosimulation," NATO Advanced Study Institute Workshop on Hardware/Software Codesign, Lake Como, Italy, June 18-30, 1995.
  5. S. S. Bhattacharyya, P. K. Murthy, and E. A. Lee, "Converting Graphical DSP Programs into Memory-Constrained Software Prototypes," Proc. of IEEE Int. Workshop on Rapid Systems Prototyping, Chapel Hill, NC, June 7-9, 1995 .
  6. A. Kalavade and E. A. Lee, "The Extended Partitioning Problem: Hardware/Software Mapping and Implementation-Bin Selection," Proc. of IEEE Int. Workshop on Rapid Systems Prototyping, Chapel Hill, NC, June 7-9, 1995 .
  7. C. Schwarz, J. Teich, A. Vainshtein, E. Welzl, and B. L. Evans, "Minimal Enclosing Parallelogram with Application," ACM Sym. on Computational Geometry, June 5-7, 1995, Vancouver, Canada.
  8. E. A. Lee and T. M. Parks, "Dataflow Process Networks," Proceedings of the IEEE, vol. 83, no. 5, pp. 773-801, May 1995.
  9. J. L. Pino, S. S. Bhattacharyya, and E. A. Lee, "A Hierarchical Multiprocessor Scheduling Framework for Synchronous Dataflow Graphs," Memorandum No. UCB/ERL M95/36, Electronics Research Laboratory, University of California, Berkeley, CA 94720, May 30, 1995.
  10. R. H. Bamberger, B. L. Evans, E. A. Lee, J. H. McClellan, and M. A. Yoder, "Integrating Layout, Analysis, and Simulation Tools in Electronic Courseware for Teaching Signal Processing," Invited Paper, Proc. of IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, May 8-12, 1995, Detroit, MI, pp. 2873-2876.
  11. A. Kalavade, J. L. Pino, and E. A. Lee, "Managing Complexity in Heterogeneous Specification, Simulation, and Synthesis," Invited Paper, Proc. of IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, May 8-12, 1995, Detroit, MI, pp. 2833-2836.
  12. K. Khiar and E. A. Lee, "Modeling Radar Systems Using Hierarchical Dataflow," in Proc. of IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Detroit, MI, May 8-12, 1995, pp. 3259-3262.
  13. T. M. Parks and E. A. Lee, "Non-Preemptive Real-Time Scheduling of Dataflow Systems," in Proc. of IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Detroit, MI, May 8-12, 1995, pp. 3235-3238.
  14. J. L. Pino and E. A. Lee, "Hierarchical Static Scheduling of Dataflow Graphs onto Multiple Processors," Proc. of IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Detroit, MI, May 8-12, 1995, pp. 2643-2646.
  15. The Ptolemy Team, "The Ptolemy Kernel - Supporting Heterogeneous Design," RASSP Digest Newsletter, vol. 2, no. 1, pp. 14-17, 1st Quarter, April 1995.
  16. S. S. Bhattacharyya, J. T. Buck, S. Ha, and E. A. Lee, "Generating Compact Code from Dataflow Specifications of Multirate Signal Processing Algorithms," IEEE Trans. on Circuits and Systems I: Fundamental Theory and Applications, vol. 42, no. 3, pp. 138-150, March 1995.
  17. J. L. Pino, S. Ha, E. A. Lee, and J. T. Buck, "Software Synthesis for DSP Using Ptolemy," Journal on VLSI Signal Processing, vol. 9, no. 1, pp. 7-21, Jan. 1995.
  18. S. S. Bhattacharyya, P. K. Murthy, and E. A. Lee, "Two Complementary Heuristics for Translating Graphical DSP Programs into Minimum Memory Software Implementations," Memorandum No. UCB/ERL M95/3, Electronics Research Laboratory, University of California, Berkeley, CA 94720, January 10, 1995.
  19. S. S. Bhattacharyya, S. Sriram, and E. A. Lee, "Optimizing Synchronization in Multiprocessor Implementations of Iterative Dataflow Programs," ERL Technical Report UCB/ERL M95/2, University of California, Berkeley, CA 94720, January 5, 1995.
  20. P. K. Murthy, S. S. Bhattacharyya, and E. A. Lee, "Combined Code and Data Minimization for Synchronous Dataflow Programs," Memorandum No. UCB/ERL M94/93, Electronics Research Laboratory, University of California, Berkeley, CA 94720, November 29, 1994.
  21. J. T. Buck, "Static Scheduling and Code Generation from Dynamic Dataflow Graphs with Integer-Valued Control Systems," Invited Paper, Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Oct. 31 - Nov. 2, Pacific Grove, CA, 1994.
  22. M. J. Chen and E. A. Lee, "Design and Implementation of a Multidimensional Synchronous Dataflow Environment," Invited Paper, Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Oct. 31 - Nov. 2, Pacific Grove, CA, 1994.
  23. B. L. Evans and J. H. McClellan, "Algorithms for Symbolic Linear Convolution," Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Oct. 31 - Nov. 2, Pacific Grove, CA, 1994, pp. 948-953.
  24. B. L. Evans, S. X. Gu, and R. H. Bamberger, "Interactive Solution Sets as Components of Fully Electronic Signals and Systems Courseware," Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Oct. 31 - Nov. 2, Pacific Grove, CA, 1994, pp. 1314-1319.
  25. B. L. Evans, J. Teich, and C. Schwarz, "Automated Design of Two-Dimensional Rational Decimation Systems," Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Oct. 31 - Nov. 2, Pacific Grove, CA, 1994, pp. 498-502.
  26. B. L. Evans, J. Teich, and T. A. Kalker, "Families of Smith Form Decomposition to Simplify Multidimensional Filter Bank Design," Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Oct. 31 - Nov. 2, Pacific Grove, CA, 1994, pp. 363-367.
  27. P. K. Murthy and E. A. Lee, "Optimal Blocking Factors for Blocked, Non-Overlapped Multiprocessor Schedules," Invited Paper, Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Oct. 31 - Nov. 2, Pacific Grove, CA, 1994.
  28. J. L. Pino, T. M. Parks and E. A. Lee, "Mapping Multiple Independent Synchronous Dataflow Graphs onto Heterogeneous Multiprocessors," Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, Oct. 31 - Nov. 2, 1994.
  29. S. Sriram and E. A. Lee, "Statically Scheduling Communication Resources in Multiprocessor DSP Architectures," Invited Paper, Proc. of IEEE Asilomar Conf. on Signals, Systems, and Computers, Oct. 31 - Nov. 2, Pacific Grove, CA, 1994.
  30. J. Teich, S. Sriram, L. Thiele, and M. Martin, "Performance Analysis of Mixed Asynchronous-Synchronous Systems", Proc. of the IEEE Workshop on VLSI Signal Processing, Oct. 26 - 28, 1994, pp. 103-112. Proceedings published as IEEE VLSI Signal Processing VII
  31. A. Kalavade and E. A. Lee, "A Global Criticality / Local Phase Driven Algorithm for the Constrained Hardware/Software Partitioning Problem," Proc. of Codes/CASHE 94, Third International Workshop on Hardware/Software Codesign, Grenoble, France, Sept. 22-24, 1994, pp 42-48.
  32. J. T. Buck, "A Dynamic Dataflow Model Suitable for Efficient Mixed Hardware and Software Implementations of DSP Applications," Proc. of Codes/CASHE 94, Third International Workshop on Hardware/Software Codesign, Grenoble, France, Sept. 22-24, 1994.

    The following papers were accepted for publication but have not yet appeared in print:

  33. S. S. Bhattacharyya, P. K. Murthy, and E. A. Lee, "Optimal Parenthesization of Lexical Orderings for DSP Block Diagrams," to appear in IEEE Workshop on VLSI Signal Processing, Osaka, Japan, October 16-18, 1995.
  34. J. L. Pino, S. S. Bhattacharyya, and E. A. Lee, "A Hierarchical Multiprocessor Scheduling System for DSP Applications," to appear in IEEE Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, October 29 - November 1, 1995.
  35. T. M. Parks, J. L. Pino, and E. A. Lee, "A Comparison of Synchronous and Cyclo-Static Dataflow," to appear in IEEE Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, October 29 - November 1, 1995.

    The following papers have been submitted for publication:

  36. G. Arslan, B. L. Evans, F. A. Sakarya, and J. L. Pino, "Performance Evaluation and Real-Time Implementation of Subspace, Adaptive, and DFT Algorithms for Multi-Tone Detection," submitted to IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Atlanta, GA, May 1996.
  37. P. K. Murthy and E. A. Lee, "Extension of Multidimensional Synchronous Dataflow to Handle Arbitrary Sampling Lattices," submitted to IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Atlanta, GA, May 1996.
  38. J. L. Pino, M. C. Williamson, and E. A. Lee, "Interface Synthesis in Heterogeneous System-Level DSP Design Tools," submitted to IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Atlanta, GA, May 1996.

7. Presentations

These presentations do not include the presentations of the 23 conference papers listed in the previous section.
  1. Edward A. Lee, "Dataflow Models of Computation and their Application to Signal Processing," at INRIA, Sophia-Antipolis, France, October 3, 1994.
  2. Sangjin Hong, "Modeling Multiprocessor Architectures For Simulation," Ptolemy Group Meeting, September 1994.
  3. Mike C. Williamson, "VHDL Domain, and the Vantage and Synopsys Environment," Ptolemy Group Meeting, October, 1994.
  4. Christopher X. Hylands, "Advanced Use of Emacs," Ptolemy Group Meeting, October, 1994.
  5. Alan Kamas, "Ptolemy Software Environment," at the SRC Review, U.C. Berkeley, Berkeley, CA, October 24, 1994.
  6. Asawaree Kalavade, "Design Methodology Management," at the SRC Review, U.C. Berkeley, Berkeley, CA, October 24, 1994.
  7. Brian L. Evans, "Status of Ptolemy Project," at the SRC Review, U.C. Berkeley, Berkeley, CA, October 24, 1994.
  8. Brian L. Evans, "Multiprocessor Architectures for DSP," at the SRC Review, U.C. Berkeley, Berkeley, CA, October 24, 1994.
  9. Edward A. Lee, "Dataflow Process Networks," at IRISA in Rennes, France, October, 1994.
  10. Edward A. Lee, "Dataflow process networks and their application to parallel systems design," at INRIA, Sophia-Antipolis, France, Nov. 17, 1994.
  11. Edward A. Lee, "Dataflow process networks and their relationship to synchronous languages," at the workshop on Synchronous Languages, Dagstuhl, Germany, Nov. 29, 1994.
  12. Edward A. Lee, "Dataflow Process Networks," at Aachen Institute of Technology, Aachen, Germany, Dec. 7, 1994.
  13. Brian L. Evans, "Automatic Design of Two-Dimensional Rational Decimation Systems," at the Georgia Tech DSP Seminar, Atlanta, GA, January 5, 1995.
  14. Brian L. Evans, "An Overview of the Ptolemy Project and Software Environment," at Georgia Tech, Atlanta, GA, January 9, 1995.
  15. Edward A. Lee, "The Ptolemy Project," at the RASSP Principal Investigators Conference, Atlanta, GA, January 10, 1995.
  16. Edward A. Lee, "Hierarchy of Dataflow Systems," at Georgia Tech, Atlanta, GA, January 12, 1995.
  17. Brian L. Evans, "Automated Design of Two-Dimensional Rational Decimation Systems," at the Industrial Liaison Program Conference, U.C. Berkeley, March 9, 1995.
  18. Thomas M. Parks, "Effective Scheduling of Process Networks," at the Industrial Liaison Program Conference, U.C. Berkeley, March 9, 1995.
  19. José L. Pino, "Hierarchical Static Scheduling of Dataflow Graphs onto Multiple Processors," at the Industrial Liaison Program Conference, U.C. Berkeley, March 9, 1995.
  20. Edward A. Lee, "Overview of the Ptolemy Project," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  21. Edward A. Lee, "The Ptolemy Kernel and Software Architecture," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  22. Asawaree Kalavade, "Design Methodology Management for System-Level Design," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  23. Brian L. Evans, "Symbolic Computation in System Simulation and Design," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  24. Mike C. Williamson, "VHDL Code Generation for Simulation and Synthesis," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  25. Shuvra Bhattacharyya, "Optimization Issues in Embedded Software Synthesis," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  26. Praveen K. Murthy, "Combined Code and Data Memory Minimization," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  27. S. Sriram, "Parallel Implementation," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  28. José L. Pino, "Real-Time Prototyping," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  29. Wan-Teh Chang, "Mixing Dataflow with Control," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  30. Thomas M. Parks, "An Introduction to a Mathematical Model of Dataflow," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  31. Thomas M. Parks, "The Process Network Domain," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  32. Alan Kamas, "Preview of Ptolemy Versions 0.5.2 and 0.6," at the Ptolemy Mini-conference, U.C. Berkeley, March 10, 1995.
  33. Edward A. Lee, "An Overview of the Ptolemy Project," at Stanford University, April 13, 1995.
  34. Edward A. Lee, "A View of System-Level Design," at the Alta Group of Cadence Design Systems, April 13, 1995.
  35. Edward A. Lee, "A View of System-Level Design," at the DSP Seminar, UC Berkeley, April 14, 1995.
  36. Edward A. Lee, "A View of System-Level Design," at an SRC project review, Case Western Reserve University, Cleveland, Ohio.
  37. Edward A. Lee, "Issues in Networked Signal Processing," at an ARPA workshop on Distributed Adaptive Signal Processing, Alexandria, Virginia.
  38. Edward A. Lee, "Introduction to DSP - A View of the Industry," DSPx, San Jose, CA, May 15, 1995.
  39. José L. Pino, "Hierarchical Static Scheduling of Synchronous Dataflow Graphs onto Multiple Processors," Invited Talk, Thayer School of Engineering, Dartmouth College, Hanover, NH, May, 1995.
  40. Edward A. Lee, "System-Level Design Methodology," DSPx, San Jose, CA, May 17, 1995.
  41. Wan-Teh Chang, "Introduction to Hot Java," DSP Design Group Meeting, June 14, 1995, Berkeley, CA.
  42. Edward A. Lee, "System-Level Design of Signal Processing Systems," presented as part of a tutorial on "Domain Specific Design Tools for DSP," Design Automation Conference, June 16, 1995.
  43. Edward A. Lee, "The University On-Line," talk at a Mini-Conference on Multimedia Networking at Berkeley on June 16, 1995.
  44. Brian L. Evans, "Interactive Solution Sets as Components of Fully Electronic Signals and Systems Courseware," at the Knowledge-Based Signal Processing Roundtable, Boston University, Boston, MA, June 30, 1995.
  45. Brian L. Evans, "Programming Computer Algebra Systems to Justify Their Answers and to Diagnose Errors in Incorrect Solutions," at the NSF Workshop on Revitalizing the Engineering, Mathematics, and Science Curricula Via Symbolic Algebra, Invited Talk, Terre Haute, IN, July 10, 1995.
  46. José L. Pino, "A Hierarchical Multiprocessor Scheduling Framework for DSP Applications," Invited Talk, AT&T Bell Laboratories, Murray Hill, NJ, July, 1995.
  47. Brian L. Evans, "Overview of the Ptolemy Project," Tubitak-Marmara Government Research Laboratory, Gebze, Turkey, August 24, 1995.
  48. Edward A. Lee, "Execution Policies for Dynamic Dataflow," at the DSP Seminar, September 6, 1995, U.C. Berkeley, Berkeley, CA.
  49. Farhad Jalilvand, "New Communications Demonstrations in Ptolemy," at the Ptolemy Group Meeting, U.C. Berkeley, Berkeley, CA, September 7, 1995.


Last updated 10/10/97. Send comments to www@ptolemy.eecs.berkeley.edu.