19. Architecture Working Group Discussions

19.1 INTRODUCTION

The membership of the Architecture Working Group was as follows:

The group worked loosely from the issues list assigned to them. The issues were used to delimit the topics for discussion, though within those topics the issues were not directly debated. At the end of a session, the group would tie their discussion back to the issues list and see which had been resolved.

The group met in four sessions, the fourth of which was a joint session with the Application Program Interface Working Group.

The first three sessions covered approximately the following ground:

After the second session there was a plenary session at which it became clear that there were conflicts between the Application Program Interface Working Group and the Architecture Working Group, both in terminology and in the fundamental assumptions each group was making (for example: can more than one process write to the same window?). A joint session was therefore held with the Application Program Interface Working Group to attempt to resolve these questions.

The remainder of this chapter summarizes the group's discussions. The final report of the Working Group is contained in Chapter 20. The structure of the next section reflects the organization of the group's sessions given above.

19.2 DISCUSSION

19.2.1 Session 1

The first issue to be addressed was: Is the window manager responsible for all images on the screen? The general view seemed to be that yes, it is, and that all objects on the screen must be windows. Icons caused some difficulty: is an icon a window or not? It was agreed that if icons are not windows, then they are certainly the responsibility of the window manager, at least as far as rendering icons and handling clicks on them are concerned.

At this very early stage it became apparent that terminology and definitions were going to be significant issues - there appeared to be as many definitions of what a window is as there were window manager systems. In the Whitechapel MG-1 window manager, for instance, a window is made up of multiple rectangles which move together but are not necessarily all owned by the same application. The Application Program Interface Working Group would think of windows more as a unit of screen resource managed by a single application. This is an issue that is addressed in Chapter 22.

The discussion then turned to the graphics interface to the window manager. There are two basic models for this: first, the application passes a (possibly virtual) bitmap to the window manager; second, it passes a structured display list. The SUN and Whitechapel window managers both use the bitmap approach: the window manager provides access to a bitmap that the client process can write into.
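As a rough illustration of the contrast (the types below are invented for this sketch and are not taken from any of the systems discussed), a bitmap-style interface hands the client a region of pixels to write into, whereas a display-list interface hands the window manager a structure that it traverses and renders itself:

    /* Illustrative sketch only: hypothetical types contrasting the two models. */

    /* Model 1: the client writes pixels into a (possibly virtual) bitmap
       provided by the window manager. */
    typedef struct {
        int width, height;        /* size in pixels     */
        int bytes_per_row;
        unsigned char *pixels;    /* client writes here */
    } Bitmap;

    /* Model 2: the client passes a structured display list which the
       window manager traverses and renders on the client's behalf. */
    typedef enum { DL_LINE, DL_TEXT, DL_RASTER } DlOp;

    typedef struct DlItem {
        DlOp op;
        union {
            struct { int x0, y0, x1, y1; } line;
            struct { int x, y; const char *str; } text;
            struct { int x, y; Bitmap image; } raster;
        } u;
        struct DlItem *next;
    } DlItem;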

One idea that came up was to have two levels of graphics package: one to draw lines, text etc on the screen, and a second at a higher, application level, for example GKS. The drawback to this approach seems to be that taking complex data structures and drawing them efficiently at the lower level requires rich functionality (for example, to deal reasonably with proportional spacing, kerning etc in text), so that in effect the functionality of the lower level is pushed up a level. With bitmaps you do not have this problem. There is, however, a need for some output capability in the kernel, for example to display messages during system booting.

It was generally agreed that separating the generation of text and images into two levels was the wrong thing to do. The next question was whether bitmaps should be constructed and passed to the window manager or whether the window manager and application should be allowed to call primitives. Ideally the window manager would generate graphics from a powerful general-purpose representation fed into it from the application. At present the general representation is the bitmap!

In the SUN system the main problem with having graphics primitives in a library in the user process is that even trivial programs are then 200-300Kbytes because the library is so large. The origins of this problem lie in not knowing what the output device will be at load time. The problem is exacerbated by not having shared libraries under Unix systems. Of the three options for where to put the graphics code (application, window manager or kernel), the last is ruled out because of the undesirability of putting complex code in the kernel. Putting the code in the application really requires that the operating system support shared libraries if substantial overheads in application program size are not to be incurred.

There was no general resolution of the problem of where to put the code; at the present time the answer seems to be that it depends on local circumstances and tradeoffs.

It was felt that there should be a better way of specifying images to the window manager than as bitmaps. One of the problems with bitmaps is that they are not device independent. Very few systems actually deal with device independence. The SUN does reasonably well but there are some problems when it comes to device independence between monochrome and colour devices.

Out of this discussion grew the first window manager architecture shown in Figure 19.1.

Figure 19.1: The first window manager architecture - Applications 1 and 2, each with its own library, sit above a centralised window manager (or display manager), which in turn sits above the hardware.

The portability of graphics primitives applies between the application and the library. If the library does not reduce the image to a bitmap, it would be possible to achieve device independence by having a separate device driver for each device in the centralized window manager. The centralized manager knows about the hardware and so this is a good place to contain device dependence.

If the window manager does contain graphics primitives it is important to get the functionality right. Should the functionality include circles, ellipses, conic arcs etc? There are neat schemes available for generating such primitives, but are these at the right level? The ISO standardization work on the Computer Graphics Interface has some bearing here.

The ability to generate images in places other than on the screen (in a saved bitmap, say) was thought to be important, though this is more doubtful if the hardware can do antialiasing. SunDew, for example, generates images on a canvas which can then be made visible on the screen, or the image might be transferred to a laser printer by DMA.

For some applications, the ability to work in terms of painting bits was thought to be important and a natural way to work. The GKS cell array primitive was seen by some as one possible way to achieve this. Cell array is effectively a virtual bitmap in an abstract coordinate system. By setting the coordinate transformations appropriately, cell array primitives can be mapped directly onto the hardware resolution bitmaps.
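A minimal sketch of the idea (hypothetical names, not an actual GKS binding): a cell array defined over a world-coordinate rectangle is mapped to device pixels by the current transformation, and if the transformation is chosen so that one cell covers exactly one pixel, the cell array coincides with the hardware-resolution bitmap.

    /* Hypothetical sketch: draw a cell array whose world-coordinate
       rectangle is (px,py)-(qx,qy) and which holds dx * dy colour cells. */
    typedef struct { double sx, sy, tx, ty; } Transform;  /* device = world * s + t */

    static void draw_cell_array(const int *cells, int dx, int dy,
                                double px, double py, double qx, double qy,
                                Transform t,
                                void (*set_pixel)(int x, int y, int colour))
    {
        /* Corners of the cell array in device (pixel) coordinates;
           assumes the transformed rectangle is non-empty. */
        int x0 = (int)(px * t.sx + t.tx), y0 = (int)(py * t.sy + t.ty);
        int x1 = (int)(qx * t.sx + t.tx), y1 = (int)(qy * t.sy + t.ty);

        for (int x = x0; x < x1; x++)
            for (int y = y0; y < y1; y++) {
                /* Which cell does this device pixel fall in? */
                int i = (x - x0) * dx / (x1 - x0);
                int j = (y - y0) * dy / (y1 - y0);
                set_pixel(x, y, cells[j * dx + i]);
            }

        /* If the transformation makes x1 - x0 == dx and y1 - y0 == dy,
           each cell maps onto exactly one hardware pixel. */
    }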

19.2.2 Session 2

The first part of the discussion in this session concerned redrawing.

There was general agreement that the window manager should be able to say to an application program "redraw the contents of this window". The more difficult question to answer is when redrawing should be done by the window manager and when by the application. If the window manager handles redraw, then the window manager has to maintain an off-screen copy of each window. This is expensive for multiplane colour images. Some systems have the concept of a damage list which records the regions of the window that need to be redrawn when the window pops to the front. In general, applications seem to have found this facility hard to use.
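As an illustration (hypothetical structures, not taken from any of the systems named here), a damage list can be kept as a list of rectangles that the window manager extends as regions become obscured, and then either hands to the application or replays from a retained off-screen copy when the window is exposed:

    #include <stdlib.h>

    /* Hypothetical damage-list sketch: record the regions of a window
       that need redrawing when it next becomes visible. */
    typedef struct { int x, y, w, h; } Rect;

    typedef struct Damage {
        Rect region;
        struct Damage *next;
    } Damage;

    typedef struct {
        Damage *damage;          /* regions to redraw on exposure             */
        unsigned char *backing;  /* off-screen copy if the window manager     */
                                 /* redraws; NULL if the application is asked */
    } WindowRecord;

    static void add_damage(WindowRecord *w, Rect r)
    {
        Damage *d = malloc(sizeof *d);
        if (d == NULL)
            return;              /* sketch only: silently drop on failure */
        d->region = r;
        d->next = w->damage;
        w->damage = d;
    }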

A complication for redraw is that the size of the window may have changed. Some systems do not allow window size changes, and some applications do not support them. In some terminal emulators, changing the size results in the same information being displayed but at a different font size. Only the application program can know how to handle resize requests, and it was felt to be important that an application should be able to ignore or refuse such requests.

The discussion for the remainder of this session concerned the vexed issues of structure and terminology.

The necessity for grouping windows was thought to be doubtful.

The Whitechapel window manager uses the term panel for an unadorned region; SunDew uses the term canvas, and the Cambridge Rainbow display uses the term pad. The words cluster and window are used for adorned regions. An adorned region is a hierarchy of unadorned regions.

It was felt that panels should be clipped to the boundaries of their parents, though in some applications one might want to have children hanging off the side. There was some support for the view that adorned regions should not be restricted to rectangular shapes.

The client is given control of part of the screen, and three clipping rectangles: the physical boundary, the current clipping region and the maximum clipping region. The client cannot expand into the adorned region unless explicitly requested to do so.
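One way to picture this (illustrative names only, not an interface from any system discussed): each client region carries the three rectangles, with output confined by the current clipping region and never allowed to grow past the maximum clipping region into the adornments.

    /* Illustrative sketch of the three clipping rectangles described above. */
    typedef struct { int x, y, w, h; } Rect;

    typedef struct {
        Rect physical;   /* the part of the screen actually given to the client */
        Rect current;    /* clipping region in force for output right now       */
        Rect maximum;    /* largest region the client may ever clip to; the     */
                         /* surrounding adornments lie outside this rectangle   */
    } ClientRegion;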

19.2.3 Session 3

This session concentrated on input issues.

The first issue addressed was whether the window manager should support user-in-control, application-in-control, or both modes of operation. There was agreement that both modes should be supported; there was felt to be no danger in doing this.

The GKS input model was discussed as a basis for input in a window manager. The GKS model supports six classes of logical input devices, LOCATOR (returning a position), VALUATOR (delivering a value in some range), CHOICE (delivering a selection from a number of choices), PICK (identifying a group of primitives in a picture), STRING (delivering a text string) and STROKE (delivering a sequence of positions). Each logical input device may operate in each of three modes, REQUEST (in which the application is suspended until the operator supplies input from the specified device), SAMPLE (which gives the current status of the specified device), and EVENT (the operator may generate input asynchronously which is collected in a central queue which the application program may interrogate). The mapping of physical to logical input devices is the responsibility of the application program.
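A compact way to see the model (a hypothetical C rendering for illustration, not an actual GKS binding): the six logical device classes crossed with the three operating modes, with events carrying the class and device number that produced them.

    /* Hypothetical sketch of the GKS logical input model described above. */
    typedef enum {
        GKS_LOCATOR,    /* a position                          */
        GKS_VALUATOR,   /* a value in some range               */
        GKS_CHOICE,     /* a selection from a set of choices   */
        GKS_PICK,       /* a group of primitives in a picture  */
        GKS_STRING,     /* a text string                       */
        GKS_STROKE      /* a sequence of positions             */
    } GksDeviceClass;

    typedef enum {
        GKS_REQUEST,    /* application suspends until the operator replies  */
        GKS_SAMPLE,     /* current status of the device is read immediately */
        GKS_EVENT       /* operator input is queued asynchronously          */
    } GksOperatingMode;

    typedef struct {
        GksDeviceClass device_class;  /* which class of logical device fired */
        int device;                   /* logical device number               */
        /* the event data itself would follow, eg a position for LOCATOR */
    } GksEvent;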

An immediate need for pointing devices, keyboards and valuators (knobs or potentiometers) could be seen. There was a view that the window manager should merely pass all input on to the application.

Feedback was seen as an important issue. To guarantee smooth feedback, it seems necessary to handle it in the window manager. It was recognized that downloading procedures to the window manager, to be executed in response to specific types of input, is an elegant way to control feedback.

The GKS input model was felt not to be rich enough in some respects. For example, there is a need to be able to treat single and multiple key clicks differently. This implies a finer grain of reporting than that provided by GKS.

In the Cambridge Rainbow terminal, an unencoded keyboard has been found to be very useful. The interface to this device is tailored so that the application can state which keys are to be encoded and which not. This is achieved through a code table.
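A minimal sketch of such a code table (hypothetical layout, not the Rainbow's actual interface): one entry per key stating whether that key is delivered encoded as a character or passed through as a raw key event.

    /* Hypothetical code-table sketch: the application marks, per key,
       whether it wants the encoded character or the raw key event. */
    #define N_KEYS 128

    typedef struct {
        int encode;      /* nonzero: deliver the character below */
        char character;  /* encoding used when 'encode' is set   */
    } KeyEntry;

    typedef KeyEntry CodeTable[N_KEYS];

    /* The application hands the table to the terminal; keys with
       encode == 0 are reported as raw key-down / key-up events instead. */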

In the GKS model, input events are entered onto the queue when some trigger device fires. The same physical trigger may control more than one logical device, though the standard does not specify how such clusters of logical input devices can be configured. Most existing implementations do not provide much, if any, control at this level. It was thought that a configuration language in the window manager, analogous to Cedar's TIP program, would be useful.

There was a general view that the GKS input model should be supported in some form. It was recognized that the GKS primitives are too restricted for some applications and lower level events should be reported, such as crossing region boundaries, depressing or releasing mouse buttons etc. It seems that a library of input tools would be a sensible way to present different input techniques.

The problem of where input is directed was addressed. The idea of input from a window being directed to a port was proposed and accepted. There was discussion of what should happen to input in the queue when the process connected to the port dies. There seem to be various answers to this question, ranging from "destroy the input" to "forward the input to the next process listening to the port".

Examples from different contexts were put forward. Some users rely heavily on the ability to type or click ahead, though this may in part be attributable to system response time - ie impatience. In the SunDew system feedback is given immediately by a PostScript function which can do echoing if it wants to. One system built in this way actually recognized termination commands for applications and switched listeners when a termination command was encountered. This can lead to some very undesirable situations, for example if a termination command demands confirmation. In some circumstances the onus must be on the application to close down the input queue.

There was a general view that the ability to download procedures to the window manager was a good way to tailor facilities to applications, for example to filter out all mouse events for a particular application. The same effects can be obtained through table-driven systems, though downloading was felt to be more elegant. However, there is still more work to be done in this area.
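On a single machine the idea can be pictured as registering a filter procedure with the window manager (a sketch with invented names; a system such as SunDew would download PostScript rather than native code):

    /* Hypothetical sketch: a downloaded (or table-selected) input filter.
       Returning zero discards the event, eg to drop all mouse events for
       one particular application. */
    typedef struct { int type; int x, y; long time; } Event;

    typedef int (*EventFilter)(const Event *ev);

    enum { KEY = 0, MOUSE_MOVE = 1, MOUSE_BUTTON = 2 };   /* illustrative types */

    static int no_mouse_events(const Event *ev)
    {
        return !(ev->type == MOUSE_MOVE || ev->type == MOUSE_BUTTON);
    }

    /* Passing no_mouse_events to the window manager would be the downloading
       step; in a table-driven system a table plays the role of the filter. */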

It was felt that the level of abstraction on the input side should be related to the level of abstraction on the output side. For example, the coordinates in input events in a particular panel should be in the same coordinate system as the output primitives used to create the panel. This was considered an important point.

Where to put menu-handling was seen as an issue. Putting menu-handling in the window manager was felt to be too restrictive unless it were programmable. Requiring everything to be programmable was felt to be too extreme a view.

Sneak paths were discussed at some length. Sneak paths are certainly used in window managers even if they are not always recognized as such. Rubber band lines done in the window manager are a good example of a sneak path. Often sneak paths are used to buy speed. Sneak paths attempt to anticipate the application program's interpretation of an event. They should be considered harmful because they will sometimes get the interpretation wrong and may mislead the user.

It was thought that events in the input queue should be timestamped.

There was felt to be a need for the window manager to be able to interrupt the application program as a general mechanism. Some application programmers want to be able to use this control mechanism. This places a requirement on operating systems that are to support window managers.

The pointing device should have at least one button. The group were intrigued to hear that someone has invented a pointing keyboard - there was speculation that the next pointing device will be a chair on wheels!

The Whitechapel MG-1 system has the idea of a listening window. Input is directed to the listening window. The listening window can be changed by clicking on another window. The character stream does not automatically move with the mouse as happens on some systems. The Whitechapel system has the advantage that you can move the cursor outside the listening window whilst typing on the keyboard.

The group agreed to the use of the term listener for the port receiving input.

The major issue in this area seems to be the degree of graphics functionality below the centralized window manager. In the discussion following the presentation of the Working Group's report, Peter Bono argued that the impact of distributed systems on the architecture needs to be addressed. If this is not done, it may well not be possible to take advantage of distributed systems techniques in realizing the architecture. David Rosenthal pointed out that there must be a configurable mapping of the keyboard to avoid the problem of applications wiring in the use of more than one mouse button.
