Researchers: Michael C. Williamson
Advisor: Edward A. Lee
The central problem in designing the parallel hardware implementation is mapping the task graph onto hardware resources that have yet to be synthesized. To minimize cost while meeting performance requirements, we take advantage of opportunities for resource sharing at the coarse-grain level of task granularity. Because a coarse-grain task graph contains fewer nodes than a fine-grain or arithmetic-level representation, determining a near-optimal partitioning is faster in our approach than in an equivalent behavioral synthesis approach applied to the same application. As a result, more alternative design choices can potentially be explored. A limitation is that the effects of coarse-grain resource sharing on area and performance are harder to predict, and so there is no exact optimal solution to the constrained optimization problem.
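The flavor of this tradeoff can be illustrated with a small sketch. The code below is not the thesis's algorithm; it is a hypothetical greedy heuristic over an invented task model, in which tasks of the same type may be merged onto one shared hardware resource, saving area at the cost of serializing their execution. All task names, areas, and times are made up for illustration.

```python
# Hypothetical sketch of greedy coarse-grain resource sharing.
# Each task is (name, type, area, time). Tasks assigned to the same
# group share one resource: area is paid once per group, but the
# group's tasks execute serially, lengthening the schedule.

def makespan(groups):
    # Crude schedule bound: resources run in parallel, tasks sharing
    # a resource run serially, so the length is the longest group.
    return max(sum(t[3] for t in g) for g in groups)

def area(groups):
    # One resource instance per group, sized for its largest member.
    return sum(max(t[2] for t in g) for g in groups)

def partition(tasks, deadline):
    # Start with one resource per task, then greedily merge pairs of
    # same-type groups while the serialized schedule meets the deadline.
    groups = [[t] for t in tasks]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                if groups[i][0][1] == groups[j][0][1]:  # same task type
                    trial = [g for k, g in enumerate(groups) if k not in (i, j)]
                    trial.append(groups[i] + groups[j])
                    if makespan(trial) <= deadline:
                        groups = trial
                        merged = True
                        break
            if merged:
                break
    return groups
```

For example, with tasks `[("fir1", "FIR", 10, 4), ("fir2", "FIR", 10, 4), ("fft", "FFT", 30, 6)]` and a deadline of 10, the two FIR tasks merge onto one shared resource, reducing area from 50 to 40 while the schedule length grows from 6 to 8, still within the deadline. A heuristic like this finds only a near-optimal point, consistent with the observation that the constrained problem has no exact solution in practice.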
Our approach also supports verification through simulation at the algorithmic level, which gives the designer confidence in systems partitioned into hardware and software. It also allows the designer to verify the hardware design within a co-simulation environment that provides the test-vector stimuli and facilities for output data analysis. Our approach guarantees that the partitioning into hardware and software, and the partitioning within the hardware itself, neither introduce deadlock nor corrupt the synchronization between the various hardware and software components. This is an issue that many algorithm-to-implementation design tools do not explicitly address. We have described the requirements for co-simulation and have demonstrated its implementation within the Ptolemy simulation and prototyping environment.
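One way to see why such a guarantee is plausible is to model the partition cut as communication over bounded blocking FIFOs. The sketch below is an assumed model for illustration, not the Ptolemy implementation: a "software" producer and a "hardware" consumer on opposite sides of the cut interact only through a bounded channel, so the consumer never reads a token before it is produced, and the synchronization implied by the dataflow graph is preserved. The end-of-stream marker and the doubling datapath are hypothetical stand-ins.

```python
# Assumed model of synchronization across a hardware/software partition
# cut: components communicate only through bounded blocking FIFOs.
import queue
import threading

def software_producer(chan, samples):
    for x in samples:
        chan.put(x)      # blocks if the bounded FIFO is full
    chan.put(None)       # end-of-stream marker (hypothetical convention)

def hardware_model(chan, results):
    while True:
        x = chan.get()   # blocks until the producer has sent a token
        if x is None:
            break
        results.append(2 * x)  # stand-in for the synthesized datapath

chan = queue.Queue(maxsize=4)  # bounded channel across the cut
results = []
consumer = threading.Thread(target=hardware_model, args=(chan, results))
consumer.start()
software_producer(chan, [1, 2, 3])
consumer.join()
# results == [2, 4, 6]
```

Because every read blocks until its matching write and the channel capacity is finite, each token is consumed exactly once and in order; an actual tool must additionally show that the chosen channel bounds cannot deadlock the overall graph.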
An important stage in our approach is the interactive scheduling and partitioning phase. In a modern design flow, we are interested in providing feedback to the designer during scheduling and partitioning of the parallel hardware implementation. We are also interested in accepting decisions from the designer during this phase, so that fine-tuning optimizations can be made after the automated phase has performed the bulk of the planning work. Allowing some manual exploration gives the designer confidence in the quality of the automated results by opening them to review and scrutiny, and by verifying that no obvious opportunities for improvement have been missed. Manual experimentation yields a further benefit: it teaches the designer how to reason about tradeoffs and how to better utilize and control the automated phase. The result leverages the strengths of both the designer and the tool, rather than replacing one with the other.