DEJEUNER
* * *
TERRINE MAISON
* * *
FAUX FILET MAITRE d'HOTEL
POMMES ALLUMETTES
* * *
SALADE
* * *
PLATEAU de FROMAGES
* * *
CHOUX au KIRSCH
* * *
Seillac, 8 May 1979
After the initial presentations, a large part of the second day of the Workshop was spent in trying to focus on the main issues to be discussed in depth in smaller groups. Towards this end, presentations were made by Bob Dunn, Jim Foley, William Newman, and Martin Newell. These presentations and an edited version of the resulting discussion are given here as they highlight not only those areas that were considered worthy of discussion but also those areas that were considered and rejected for one reason or another.
The goal has been to gain as wide a perspective as possible. To that end, one component of the model for interaction is used, in some recursive sense, as a guide to the whole. Specifically, interaction is seen as an intentional system (Fig 1). The two primary intentions are to:
From this view, the man brings to interaction a set of expectations and requirements for which fulfilment is sought within the machine. The task at the interface is to effect a sense of control in the feedback process between the parties. The machine is invested with the designer's interpretation of what the user seeks and knowledge (capability) to perform certain functions. Communication across the interface has the goal of achieving agreement (congruence) between the parties as to respective behaviours to be invoked and maintaining the mutual activities that result within the agreed-to behavioural/discourse domain (equilibrium).
Within the machine, each behaviour is modelled as a goal-oriented activity realised by a spanning (basis) set of functions (tasks) that may be organised in some partial order. Behaviour within the machine is activated by a traverse of this lattice of tasks in some direction.
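As a minimal sketch of this idea (all task names hypothetical, and the standard-library topological sort standing in for "a traverse in some direction"), a behaviour's lattice can be written down as a partial order over tasks and activated by visiting the tasks in any order consistent with it:

```python
# A hypothetical behaviour as a partial order of tasks.  Each task
# names the tasks that must complete before it may run; tasks with
# no mutual ordering ("select_object", "select_operation") may be
# performed in either order.
from graphlib import TopologicalSorter

behaviour = {
    "select_object": set(),
    "select_operation": set(),
    "apply": {"select_object", "select_operation"},
    "confirm": {"apply"},
}

def traverse(lattice):
    """Activate the behaviour: visit every task in an order
    consistent with the partial order of the lattice."""
    return list(TopologicalSorter(lattice).static_order())

print(traverse(behaviour))
```

The point of the partial order is exactly that several traverses are legal; the choice of direction is where denotative control (discussed below) enters.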
The next aspect in the model of interaction concerns a notion of connotation. There are two system-level connotative issues: first, the collection of reference terms, reference concepts, metaphors, representations, etc that are appropriate to the object of behaviour; second, the mode of behaviour relative to interaction:
                       Role
                 ACTIVE      PASSIVE

          POSITIVE   A           C
Attitude
          NEGATIVE   B           D

A: CONSTRUCTIVE ASSERTION/QUESTION
B: DESTRUCTIVE/CRITICAL ASSERTION/QUESTION
C: CONSTRUCTIVE DEFENCE/EXPLANATION
D: DESTRUCTIVE/CRITICAL DEFENCE/EXPLANATION
Both partners may have an active or passive role and a positive or negative attitude. Here the concern is to acknowledge that changes of initiative occur in interaction. Furthermore, each change in initiative may be accompanied by a change in role and/or attitude to the process of controlling the interaction. Shifts can occur from asserting and questioning as means of control to explaining as means of control. In another direction, shifts can occur from encouraging further discourse in a direction to discouraging the direction and vice versa.
Each task in a behaviour's lattice also requires an aspect of denotative control (Fig 2). In the traverse of the lattice, a decision must be effected as to whether the task will (may) be executed or whether an alternative task (or tasks) is to be invoked. In fact, the decision to invoke an alternative can be a link into some point of another lattice (behaviour).
Tasks are modelled (Fig 3) as a function applied at some level (Fig 4) of discourse to some process in a spectrum (Fig 5) that ranges from perceiving the need for the function to initiating its execution.
The underlying concept is that either a variant of the function exists for each relevant point in the space of the task model, or a function is used in several ways where each way corresponds to a point in the space of the task model.
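This second reading of the concept can be sketched as a registry keyed by points in the task-model space (the level names, spectrum stages, and the "delete" function here are all hypothetical illustrations, not part of the model itself): one conceptual function, with a variant registered at each relevant point.

```python
# Hypothetical sketch: a task is a function applied at some level of
# discourse to some point on a spectrum running from perceiving the
# need for the function to initiating its execution.  A variant of
# the function is registered for each relevant point of that space.
LEVELS = ("lexical", "syntactic", "semantic")
SPECTRUM = ("perceive", "select", "execute")

variants = {}

def variant(level, stage):
    """Register a function variant at one point of the task-model space."""
    def register(fn):
        variants[(level, stage)] = fn
        return fn
    return register

@variant("semantic", "perceive")
def delete_prompt(obj):
    # The "perceive" end of the spectrum: surface the need for the function.
    return f"confirm deletion of {obj}?"

@variant("semantic", "execute")
def delete(obj):
    # The "execute" end of the spectrum: actually perform it.
    return f"deleted {obj}"

# The same conceptual function, resolved differently depending on
# where in the space it is invoked:
print(variants[("semantic", "perceive")]("line-7"))
print(variants[("semantic", "execute")]("line-7"))
```

The alternative reading (one function used in several ways) would keep a single callable and pass the point in the space as an argument; the registry above simply makes the space of variants explicit.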
What is a methodology? It is, as a working approximation, a process or a procedure or a conceptual model for understanding and/or designing. Interaction comprises:
All these components are tied together by a user's conceptual model. Three of the position papers and Chapter 28 of Newman and Sproull seem to be expressing similar frameworks:
DUNN           FOLEY        MORAN         NEWMAN

Intention      -            Task          -
Connotation    Conceptual   -             -
Denotation     Semantic     Semantic      User's Model
Rules          Syntactic    Syntactic     Command Language
Constituents   Lexical      Interaction   Information Display
The real design decisions are at the lower levels, where there are three common themes: semantic, syntactic and lexical. These terms are extrapolations of classical language themes into the graphical domain.
The design process seems to operate in a top-down, iterative fashion over the following levels:
At each level there are important, but quite different, considerations and decisions. However, human factors and psychology can help at each level.
The user's conceptual model is the set of basic concepts the user must understand to use the system. Examples are:
The semantic level is divided into an input and an output side. The input side contains the specific operations on the conceptual model, and their effects upon the model. The output side contains the particular information presentation techniques, such as the choice between bar charts, pie charts, tables, maps and wire-frame or hidden surface presentations.
The syntactic level is also divided into an input side and an output side. The input side contains the sequences of tokens (actions) required to specify the semantic actions. The output side contains the particular screen layouts containing the problem information and the prompts, menus and error messages containing the control information.
On the input side, the lexical level contains the groupings of lexemes into syntactic tokens. These lexemes are the basic hardware units, such as pen hits, keystrokes, knob positions and phonemes (for speech input).
On the output side, the lexical level contains the information encodings in terms of hardware units such as colour, intensity, linestyles, fonts and phonemes (for speech output).
Finally, an aside which is not for discussion at present - device abstractions belong at the syntactic level; the application programmer should be able to program the syntactic to lexical interface (i.e. the bindings of sequences of lexemes to syntactic tokens).
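The syntactic-to-lexical interface described in this aside can be sketched as follows (the lexeme names, token names, and greedy longest-match strategy are all hypothetical choices for illustration): the application programmer supplies a table binding sequences of lexemes to syntactic tokens, and a small routine groups the incoming lexeme stream accordingly.

```python
# Hypothetical sketch of a programmable lexical level: the application
# programmer binds sequences of lexemes (hardware units such as pen
# hits or keystrokes) to syntactic tokens.
bindings = {
    ("pen_hit",): "PICK",               # one pen hit -> a PICK token
    ("key_d", "key_RETURN"): "DELETE",  # keystroke sequence -> DELETE token
}

def tokenize(lexemes, bindings):
    """Greedily group a stream of lexemes into syntactic tokens,
    preferring the longest bound sequence at each position."""
    tokens, i = [], 0
    while i < len(lexemes):
        for seq, token in sorted(bindings.items(), key=lambda kv: -len(kv[0])):
            if tuple(lexemes[i:i + len(seq)]) == seq:
                tokens.append(token)
                i += len(seq)
                break
        else:
            raise ValueError(f"no token starts with {lexemes[i]!r}")
    return tokens

print(tokenize(["pen_hit", "key_d", "key_RETURN"], bindings))
# -> ['PICK', 'DELETE']
```

Because the table is data rather than fixed code, re-binding a device (say, mapping a function key instead of a pen hit to PICK) changes only the lexical level, leaving the syntactic and semantic levels untouched.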
Perhaps we are trying to discuss a problem that is too difficult at the moment. The development of a general methodology for interaction is perhaps not a problem that we should tackle head on. Could I digress and tell you a story:
In the early days of hidden surface removal some people at Utah were trying to solve the problem by doing a massive sorting operation on about 200,000 polygons, in real time. They couldn't find a way to do it, and went around asking various people for ideas. One of the people they asked was a systems programmer in the Computer Centre, who took one look at the problem and said, "It can't be done; you'll have to sub-divide the problem." No-one took any notice, so eventually the systems programmer went away and developed it into the well-known algorithm that bears his name.
The moral of the story is: why do we not try to subdivide our difficult problem? I want to put up a strawman list of ways in which we can make progress. Let us identify some issues where we can make some progress. Let us recapture the spirit of Seillac I (and that was not what went on in the bar!).
At Seillac-I, a bunch of people discussed the topic of Methodology of Computer Graphic Systems. They traded pleasantries (and insults) and eventually something interesting happened and people began to ask pointed questions. People were arguing about issues that were well understood (supposedly!). Similarly, I think we should try and disagree on topics that we have some common understanding about rather than agree about areas where we have little understanding. Perhaps we can raise some issues that we may be able to get some common understanding on.
Concluding the encapsulation of Seillac-I, we made great strides there in the Methodology of Graphics System design. The danger is that we may take it too seriously. If we look at Seillac I, they saw one outstanding problem in graphics (see Fig. 1).
We became too concerned about climbing the one peak because it was there. Some of us came away from Seillac I and worked on the Methodology of Graphics Systems while others worked on the CORE. The result was that the ones who worked on the CORE, because they proceeded more quickly, over-shadowed what was being done on the Methodology. We must try and avoid this problem here.
Let us look at the situation as far as interaction is concerned (see Fig 2).
Notice that some of the peaks in this case do not have dates on them indicating that they have not been climbed yet. Also, they are all about the same size. We should look around for ones that we can climb.
Let us raise the excitement level again and consider the following:
If we define a set of systems P and a set of users for each system then we can define the set of users for all the systems. I would estimate that this number of users, in a few years, would be of the order of 100,000,000. How do we describe user interfaces for all these people?
If we look at the number of applications and sum over all the systems for each application then this defines the total number of products. This is a much smaller universe.
Let us look at the complications of the large user base. Some products will have many users, and traditional design methods will not work. No longer can you have users coming to you, asking "can you help me design this?", and then iterating until it is usable. We are now in the area of marketing. With a large user base, we will have to separate marketing from design and have a methodology to bind these groups together. We need mechanisms for specifying requirements and for designing systems to meet those requirements. One of the complications of a large user community is that the time spent in design will be insignificant compared with the time spent in user support and training. There will also be problems in respect of reliability, etc. It will now be sensible to spend a great deal of time in the development of the product - we will need to emphasise the ability to analyse tasks, and to develop to a high level the ability to design user interfaces and to evaluate them.
How does this influence what we should talk about? Many people have already tried to define what we mean by a methodology of interaction. Do we need to specify it precisely? Perhaps we can agree on a set of strategies and codes of practice that will help a person to design a user interface. I propose we break into smaller groups and discuss the following topics:
The difficulty before us at this time is to focus on some topics that we can get our teeth into. We must identify some specific topics for which we can measure our progress. We can then generalise from these later.
I came to this workshop to learn about man machine interaction.
I would offer you the following scenario.
Someone walks into your office who has done some machine code programming, knows a few algorithms and wants help with a new application. The question is, "What do you say to him?" I would suggest that there are several things you would wish to tell him. Firstly, one would tell him to implement a high level language on his machine. When he had done this, one would talk to him about topics such as structured programming, top-down design, etc.
Suppose that after gaining some experience in writing batch programs in a high level language he comes back and says he wants to write an interactive program. He knows who the end users are. What does one tell him at this time? This is a concrete question that we could address.
I would like to make a short digression.
There is a school of thought that says that databases are stupid programming languages, in the sense that the designers of database systems present you with a set of primitives on which you can build operations, actions, etc. However, there are limitations in this - the power of expression is limited, and somebody will run into those limits at some point. Somebody will always come back asking for some new facility. The conclusion is that databases, as implemented, are considered harmful. Information bases should be at least as powerful as a Turing machine; in other words, they should be programming systems.
I would argue that the same considerations apply to interactive interfaces. Most interactive interfaces are stupid programming languages. People want conditional statements, repeat statements, etc. Command languages should therefore be considered harmful. Interactive systems should be programming systems.
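A tiny, entirely hypothetical sketch of the distinction: instead of offering the user a fixed command vocabulary, expose the primitives (the names `add_line`, `count` and the drawing-session framing are illustrative inventions, not any real system) inside an ordinary programming system, so the user can compose repetition and conditionals that the designer never anticipated.

```python
# Hypothetical sketch: an interactive drawing session whose "commands"
# are ordinary callables, so the user composes them with loops and
# conditionals instead of being limited to a fixed command language.
def make_session():
    drawing = []
    env = {
        "add_line": lambda a, b: drawing.append(("line", a, b)),
        "count": lambda: len(drawing),
        "drawing": drawing,
    }
    return env

session = make_session()

# A "command" the designer never provided: add lines until there are
# five, expressed with a loop rather than a new built-in verb.
while session["count"]() < 5:
    session["add_line"]((0, 0), (session["count"](), 1))

print(session["count"]())   # -> 5
```

In a command language, "repeat until five lines exist" would require the designer to have anticipated and built that facility; in a programming system it is a one-line composition of existing primitives.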
Please note, however, that this is not the same thing as saying that one can learn from programming languages (semantic, syntactic, lexical categories, etc.) to understand interactive interfaces. I am saying something far more concrete than that.
Yesterday Alan Kay mentioned that he had written an application in six pages of SMALLTALK that would have taken fifty pages of Algol 60. I ask the question why is there this discrepancy? I want to suggest that it is because SMALLTALK has two essential features:
I would suggest the power of SMALLTALK is due to factor (2), not factor (1). SMALLTALK has no essentially greater expressive power than ALGOL 60. Other languages could be equally powerful if cast in a similar programming environment.
The conventional environment is as follows:
The only item the user has access to is the user program. Only the implementor has access to the other parts of the system. The view provided by SMALLTALK is something akin to:
There is no wall built around the rest of the system. In SMALLTALK the user has access to all components of the system all the time. If you take the view that an interactive dialogue is indeed an example of a programming language, the tools that you provide to help a user write programs (editors, compilers, etc) should also be valuable and available to the interactive end user of the system. When a user starts to program in SMALLTALK he is already in an interactive environment and can specify those features in his problem that are incrementally different from the SMALLTALK environment. Thus the answer I would give to the question posed above would be:
One is then led to ask the question, What is a programming environment? There are several properties one can list:
My feeling is that a subgroup could profitably look into the requirements for a good interactive programming environment.