Ancient Research

This page dates from my first incarnation as a post-doctoral researcher with the Ptolemy project at UC Berkeley, working with Professor Edward Lee and the rest of the Ptolemy group.

Most of my time recently has been taken up with Tycho, the user interface to Ptolemy. I believe most of the really hard work in Tycho is now done, and I am looking at new areas of work while completing and polishing the interface and graphical editors.

Tycho

Tycho is a user interface framework written in [incr Tcl], an object-oriented extension to Tcl. The core user interface framework was written by Professor Edward Lee; I have been responsible for extending the framework and for the components we use for building the graphical editors for Ptolemy. The other main author is Christopher Hylands; Tycho has also had key components written by a number of others, including Kevin Chang, Cliff Cordeiro, and Farhana Sheikh.

We have recently split Tycho into packages, to better facilitate modularization and distribution of domain-targeted tools. This effort reflects our goal of keeping all new components of the Ptolemy project, including the new Java kernel, highly modular.

My main original contribution to Tycho is the 2D graphics editing support. The low-level graphics model, in a package called the Slate, extends the Tk canvas to add hierarchical canvas items and user input objects called interactors. Constructing hierarchical graphics is conceptually simple, but doing it efficiently in Tk while preserving and extending the user input model is a little tricky. Interactors were invented by Brad Myers of CMU, and provide a way of abstracting the handling of a series of user interaction events into a single object. The Slate's implementation allows interactors to attach to and detach from objects dynamically, and so far the interactor model has proved able to support quite complex user interaction scenarios.
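To give a rough idea of the flavour of interactors, here is a minimal sketch in [incr Tcl] of a drag interactor that attaches itself to a plain Tk canvas item. The class and method names are invented for illustration -- the real Slate interactors are richer, and operate on Slate items rather than raw canvas items.

    package require Tk
    package require Itcl

    ::itcl::class DragInteractor {
        variable target ""    ;# the item currently being dragged
        variable lastx 0
        variable lasty 0

        # Attach to a canvas item: route its mouse events to this object.
        method attach {canvas item} {
            set target $item
            $canvas bind $item <ButtonPress-1> [::itcl::code $this press %x %y]
            $canvas bind $item <B1-Motion> [::itcl::code $this drag $canvas %x %y]
        }

        # Detach again: remove the event bindings.
        method detach {canvas item} {
            $canvas bind $item <ButtonPress-1> {}
            $canvas bind $item <B1-Motion> {}
            set target ""
        }

        method press {x y} {
            set lastx $x
            set lasty $y
        }

        method drag {canvas x y} {
            $canvas move $target [expr {$x - $lastx}] [expr {$y - $lasty}]
            set lastx $x
            set lasty $y
        }
    }

    # Example use on a bare canvas:
    #   canvas .c ; pack .c
    #   set id [.c create rectangle 10 10 60 60 -fill gray]
    #   DragInteractor drag
    #   drag attach .c $id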

The high-level graphics model is based on the model-view-controller architecture. The graphical block-diagram editors, for example, consist of a view (containing a Slate), two models -- one containing the semantic graph represented by the diagram and one containing the layout information (syntax) -- and two controllers, one for vertices of the graph (icons) and one for edges (connections or wires). This architecture allows relatively easy customization of the editor by subclassing the appropriate components. For example, an editor that only allows certain kinds of objects to be connected to certain other kinds of objects can be implemented by subclassing the edge controller and overriding the predicate that is called when the user moves the end of a connection over a connection point of an icon.
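As a sketch of what that kind of customization might look like (the class and method names here are hypothetical, not the actual Tycho ones), a specialized edge controller could override a single predicate:

    package require Itcl

    ::itcl::class EdgeController {
        # Called as the user drags the end of a connection over a
        # terminal of an icon; return 1 to accept the connection.
        method acceptConnection {from to} {
            return 1
        }
    }

    ::itcl::class DirectedEdgeController {
        inherit EdgeController

        # Only allow connections from an output terminal to an input
        # terminal; a naming convention stands in for a query on the
        # semantic graph model.
        method acceptConnection {from to} {
            expr {[string match "*.out*" $from] && [string match "*.in*" $to]}
        }
    }

The editor itself needs no other changes; it simply creates the subclassed controller instead of the default one.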

Places to go

An overview of Tycho
Slides accompanying a demonstration of Tycho to our major sponsor, March 1998. Despite the name, these slides are mostly about my own work :-) Be sure to read the "instructions" for the slides, or you won't see anything.

The Tycho User Interface System
A paper (the only Tycho paper...) presented at the Tcl/Tk workshop in 1997. A discussion at the workshop was largely responsible for the split of Tycho into packages.

The current release
I am afraid that the release on this page is very old, and that much of the work in which I am interested is absent, hard to find, or buggy. We are working hard to release a new, modular version of Tycho together with the new version of Ptolemy this summer.

Screen shots
A few screen shots of parts of Tycho.

Fluent visual languages

I am looking at novel techniques that can be used to enhance fluency in visual languages. By fluency, I mean that, with a certain amount of practice, the interface directly supports interaction methods that enable one to rapidly, accurately, and enjoyably construct visual programs. Any techniques I implement will be evaluated with user testing on the Tycho visual language editors. Possible techniques to try include:
Gesture and free-hand recognition
Gestures could be used to extend the range of operations that can be performed in a visual language editor without using menus or having a mode-selecting toolbar. For example, a gesture over an item could be used to delete it, instead of clicking on the item and pressing the Delete key. Free-hand recognition could provide a faster way for the user to draw connections between icons.

Constraints
Full constraint systems are expensive and hard to use. But a small, carefully chosen set of constraints implemented with dynamic feedback in the drawing interface could enhance accuracy and fluidity.

``Higher-order'' interaction
In higher-order programming languages, a common programming technique is to apply a higher-order function to some other function; the higher-order function provides the structure of the resulting operation, and the (typically first-order) function passed to it provides the specific operation performed. ``Higher-order interaction'' is a term I just invented for the same concept applied to user interaction. For example, I might select a set of icons or connections with a gesture to specify both the set of items operated on and the higher-order function, and use a second gesture or drag operation to specify the first-order function to be applied.
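As a very rough sketch of the analogy in plain Tcl (the procedure names are invented, and a real implementation would be driven by gestures rather than typed commands), the higher-order part supplies the iteration structure and a separate command supplies the specific operation:

    # Higher-order part: apply an operation to every selected item.
    # Selection is represented here by the canvas tag "selected".
    proc forEachSelected {canvas operation} {
        foreach item [$canvas find withtag selected] {
            $operation $canvas $item
        }
    }

    # First-order parts: specific operations that a second gesture or
    # drag might select.
    proc deleteItem {canvas item} {
        $canvas delete $item
    }
    proc nudgeRight {canvas item} {
        $canvas move $item 10 0
    }

    # "Delete everything I just circled":
    #   forEachSelected .c deleteItem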

Dynamic visualization

Visualization is an important part of our current research project. I will be looking at this area with the goal of finding techniques suitable for dynamically-evolving simulations. We may then construct a toolkit to assist in building domain-specific visualization tools.

I think the visualization problem in which we are interested is a special case of more general visualization problems. In particular, when we run a simulation, we are extending an information space that includes, among other things, the results of previous simulations and the information being produced by other concurrent simulations. Our visualization task is thus one of providing appropriate filters over this dynamically evolving space.

The simulation itself is contained within this space, and so any view of the space can include all or part of the simulation program itself. Dynamic animation of Ptolemy II's mutable systems (which accounts for a lot of the tricky code in its kernel) thus becomes a special case of representation of dynamically-evolving data.

Joe Hellerstein in CS gave a talk at the ILP conference (March 11th, 1998), in which he described an approach to visualizing and monitoring database queries as they progress. He called it a ``crystal ball,'' as opposed to a ``black box.'' This is a wonderful metaphor and one which I would like to adopt for our visualization tools. Unlike a black box, a crystal ball lets you view progressively more refined representations of the data in which you are interested (from a database query in his case, a simulation in our case), instead of just the final, complete answer.

Simulation interoperability

If that isn't a mouthful, I don't know what is! This is not really research, but a pragmatic investigation into how to allow simulations in diverse areas to communicate and inter-operate. We need, for example (driven by our major sponsor), to be able to communicate with simulations in areas as diverse as micro-fluidics and micro-electromechanical systems.

The Ptolemy II kernel supports a very wide range of possible semantics, or models of computation, a number of which will be written and released in coming months. The challenge in this particular topic is, firstly, to discover what model of computation is implemented by simulations in these other domains, and secondly, to implement an interface in Ptolemy II that extends its services to these simulations. Initially at least, we will be trying to use CORBA as the platform upon which to build the interoperability services.


John Reekie
Last updated: 11 March, 1998