Elegantiae Arbiter

"O brave new world, that has such people in't"   W Shakespeare, 'The Tempest'


1.1 While a universal Turing machine is fundamentally capable of performing any operation, many operations are common to a wide range of tasks, making it advantageous to have programs for them stored permanently. In practice these programs, particularly those concerned with controlling peripheral units and coordinating their activities with the central processing unit, are collected together in an 'operating system'. This is usually a multi-layered structure, stored in microcode hard-wired into the CPU chip, in a read-only memory which can be replaced at need, or read into read-write memory from disk at startup.

1.2 The operating system is certainly the arbiter of the personality of a computer, but its elegance may be more questionable. The basic problem in designing a system is that the convenience and speed of operations depend precisely on what the user intends to do. Since this may cover a multitude of sins unknown to the designer, an uneasy compromise has to be reached, as in many other aspects of computer programming.

1.3 Operating system programs emerged into public view with the spread of minicomputers in the late 1960's, when AT&T Bell Laboratories started work on a system that became known as UNIX. This became the system of choice for many academic and research organizations and, through the US Advanced Research Projects Agency, provided the foundations for the development of networking (ie multi-user, multi-machine systems) and the Internet. In 1973 the source code was rewritten in the C language, newly invented by Dennis Ritchie (of Bell), and distributed free to universities, so that thereafter UNIX could be run on any machine that supported a C compiler, resulting in a large increase in the application, utility, and development tool programs available. The system has remained closely associated with the Open Standards and Open Source movements and today is represented in the desktop field by Linux, an 'amateur' Open Source look-alike that currently presents a significant challenge to Microsoft domination of desk-top system programs.

1.4 The other major stream of operating system development arose from the arrival of microprocessors and the home computer revolution in the early 1970's. These early 8-bit processors did not have the resources to support a UNIX-type system, and in 1973 Gary Kildall (an academic) produced CP/M (Control Program/Monitor), which dominated the field for several years. In 1980 this was translated to 16-bit Intel 8086 assembly language (1) by Tim Paterson of Seattle Computer Products and in 1981 licensed from SCP by Microsoft and IBM for the IBM PC. In the event, further development was the antithesis of the UNIX scene - in the interests of captive customers and planned obsolescence, source code was kept a closely guarded secret and documentation sparse. In common with other proprietary software of the time, an attitude of 'take it or leave it' and 'if it doesn't work don't blame me, it must be your fault' became common.

2.1 The variety of tasks undertaken by modern computers is very large and their detailed anatomy too varied to allow more than a superficial account. Some machines, such as network controllers, file-servers, and communication nodes, spend their working lives performing essentially the same task and are organized to maximize efficiency for it, but the majority of current machines are used as general purpose 'personal computers' and have basically similar characteristics (2). A feature which is probably responsible for the rise to dominance of the IBM-PC 'desk-top' machine and its clones is the wide range of peripherals that it is capable of supporting. This in time has led to increasingly complicated 'system programs' (as opposed to application programs, which execute specific tasks) to control these effectively.

2.2 For the memory unit of the von Neumann architecture, the basic logic switches that comprise the computer are connected in small hard-wired sub-units (ie 'programmed') to implement read-write memory elements for binary numbers (usually 8-bit or a multiple thereof), together with an address-selection system allowing random access to any one of them through a common set of connections, the address and data buses.
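
By way of illustration only (in C, the language in which UNIX itself was rewritten), the arrangement can be caricatured as an array of 8-bit cells indexed by whatever address is placed on the bus; the names and sizes below are invented for the sketch, not those of any real machine.

    #include <stdint.h>
    #include <stdio.h>

    /* Caricature of the memory unit: an array of 8-bit cells, with the
       address-selection system reduced to indexing by the address on the bus. */
    #define MEM_SIZE 65536                /* a 16-bit address space, for the sake of example */

    static uint8_t memory[MEM_SIZE];

    /* 'Bus' operations: any cell is reachable at random through the same two calls. */
    uint8_t mem_read(uint16_t address)             { return memory[address]; }
    void    mem_write(uint16_t address, uint8_t v) { memory[address] = v; }

    int main(void)
    {
        mem_write(0x0100, 0x42);          /* store a byte at an arbitrary address */
        printf("byte at 0100h = %02Xh\n", (unsigned)mem_read(0x0100));
        return 0;
    }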

2.3 The CPU function is separate from this and has three basic sub-systems:

(a) a set of registers to hold data temporarily while it is operated on
(b) a control unit to fetch instructions from memory and interpret them
(c) the ALU (a set of arithmetic/logic units) to execute them

2.4 In normal operation the sequence of events is:
(a) The next instruction is fetched from the address held in an 'instruction pointer' in the CPU register set and interpreted, ie the appropriate microcode is identified.
(b) The microcode is activated, fetching any data required from memory, performing the indicated manipulation using the ALU, and storing the result in memory when required. The instruction pointer is advanced to the next instruction in store by default, or set to the indicated address if the instruction involves a program jump. A toy version of this loop is sketched below.
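
The sketch is a made-up machine invented purely for the purpose - one accumulator, a handful of arbitrary opcodes - not the instruction set of any real processor.

    #include <stdint.h>
    #include <stdio.h>

    /* A toy von Neumann machine: made-up opcodes, one accumulator, 256 bytes of store. */
    enum { OP_HALT, OP_LOAD, OP_ADD, OP_STORE, OP_JUMP };

    int main(void)
    {
        uint8_t mem[256] = {
            OP_LOAD,  10,            /* load the byte at address 10 into the accumulator */
            OP_ADD,   11,            /* add the byte at address 11 */
            OP_STORE, 12,            /* store the result at address 12 */
            OP_HALT
        };
        mem[10] = 2;  mem[11] = 3;   /* the data */

        uint8_t acc = 0;             /* the 'register set', reduced to one accumulator */
        uint8_t ip  = 0;             /* the instruction pointer */

        for (;;) {
            uint8_t opcode = mem[ip++];               /* (a) fetch and interpret */
            switch (opcode) {                         /* (b) activate the 'microcode' */
            case OP_LOAD:  acc  = mem[mem[ip++]]; break;
            case OP_ADD:   acc += mem[mem[ip++]]; break;   /* the ALU at work */
            case OP_STORE: mem[mem[ip++]] = acc;  break;
            case OP_JUMP:  ip   = mem[ip];        break;   /* a jump overrides the default */
            case OP_HALT:  printf("2 + 3 = %d\n", mem[12]); return 0;
            }
            /* by default ip has simply been advanced past the instruction just executed */
        }
    }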

2.5 From time to time this sequence may be interrupted by a tap on the shoulder requesting attention from the operating system, a peripheral unit which can't wait, or another application program if a multi-tasking system is active. In this case the normal sequence is suspended, the request for service is dealt with by diverting to an appropriate program section, and normal service resumes where it left off when this is completed. In some recent 'pre-emptive' multi-tasking systems this may occur several times before the original task resumes, creating a complex chain of pending actions which can use large amounts of storage and time in saving and restoring the machine state each time.
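
The suspend, divert, resume sequence can be sketched as below; the vector table, the two service routines and the register layout are all invented for the illustration (real systems do this in hardware and hand-written assembly, not portable C).

    #include <stdio.h>

    typedef struct {                 /* the machine state that must survive the diversion */
        unsigned ip;                 /* where the interrupted task will pick up again */
        unsigned regs[8];            /* general-purpose registers */
    } cpu_state;

    typedef void (*service_routine)(void);

    static void keyboard_service(void) { printf("  key press dealt with\n"); }
    static void timer_service(void)    { printf("  timer tick dealt with\n"); }

    /* the 'tap on the shoulder': a table mapping request numbers to program sections */
    static service_routine vector_table[2] = { keyboard_service, timer_service };

    void handle_interrupt(cpu_state *running, int request)
    {
        cpu_state saved = *running;  /* suspend: snapshot the running task's state */
        vector_table[request]();     /* divert to the appropriate program section */
        *running = saved;            /* resume where it left off */
    }

    int main(void)
    {
        cpu_state task = { .ip = 0x1234 };
        printf("task running at ip=%04X\n", task.ip);
        handle_interrupt(&task, 0);  /* the peripheral that can't wait */
        handle_interrupt(&task, 1);  /* the operating system's housekeeping */
        printf("task resumes at ip=%04X\n", task.ip);
        return 0;
    }

In the pre-emptive case a second request arrives before handle_interrupt has returned, so another 'saved' copy piles up on top of the first - the chain of pending actions, and the storage and time it costs, mentioned above.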

3.1 Because of the need to maintain backward compatibility to some degree (to avoid alienating their customers totally), these operating systems have tended to grow by accretion from the earliest 8-bit microprocessors, adding new layers of increasing sophistication while retaining the old. The discernible archaeological layers are broadly, in chronological order:

(a) ROM-BIOS (Read-Only Memory Basic Input-Output System)
This is normally installed on a separate chip so that it can be replaced with updated versions when necessary, often with extensions installed in particular peripherals (which are automatically replaced if the peripheral is changed). The main chip is located so that its entry point is at a standard memory address, used as the first instruction pointer when power is applied to the microprocessor and transients have had time to settle down. It includes two sections: the Power-On Self-Test and a configuration utility which sets up rudimentary communication facilities with the outside world (eg keyboard, VDU, disk drives). The POST tests memory and performs a roll-call of the peripherals to identify which are present, and stores data relating to their characteristics in an assigned area of random-access (read-write) memory (3).
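
The roll-call might be pictured as below; the device list, the probing and the table layout are purely hypothetical stand-ins for the sake of the sketch, not the real BIOS data area.

    #include <stdint.h>
    #include <stdio.h>

    enum device { DEV_KEYBOARD, DEV_VDU, DEV_FLOPPY, DEV_PRINTER, DEV_COUNT };
    static const char *names[DEV_COUNT] = { "keyboard", "VDU", "floppy", "printer" };

    /* Stand-in for poking the actual hardware; here the printer happens to be absent. */
    static int probe(int d) { return d != DEV_PRINTER; }

    /* The assigned area of read-write memory where the results are recorded
       for later layers (BIOS/DOS) to consult. */
    static uint8_t equipment_table[DEV_COUNT];

    int main(void)
    {
        for (int d = 0; d < DEV_COUNT; d++) {
            equipment_table[d] = (uint8_t)probe(d);
            printf("%-8s %s\n", names[d], equipment_table[d] ? "present" : "absent");
        }
        return 0;
    }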

(b) BIOS and DOS (Basic Input/Output System and Disk Operating System)
These are usually external programs stored on disk and read into read/write random access memory by a short 'bootstrap' loader, itself fetched from the disk and executed as the last action of the ROM-BIOS program. The BIOS program checks the memory and the operability of peripheral units again, then loads an optional file CONFIG.SYS which allows the user to modify the configuration or load additional drivers (ie programs which translate calls from the CPU into the electrical impulses needed to operate the corresponding peripheral). Its last action is to initiate the DOS program (4), which processes the peripheral data previously collected into appropriate tables and installs additional translation programs to set up standardized calls for peripheral service which isolate programs from the individual quirks of the peripherals. Its final acts are to load and execute a command line processor, COMMAND.COM, which processes an optional file AUTOEXEC.BAT (allowing the user to load and execute selected programs, eg to modify keyboard and VDU modes) and then sits in a continuous loop waiting for keyboard input from the user.
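
A toy version of that final loop, with a couple of invented built-in commands standing in for the real repertoire, might look like this:

    #include <stdio.h>
    #include <string.h>

    /* A toy command line processor: sit in a continuous loop, wait for keyboard
       input, act on it.  A real one would also search the disk for an external
       program of the typed name and load and execute it. */
    int main(void)
    {
        char line[128];

        for (;;) {
            printf("A>");                             /* a DOS-style prompt */
            fflush(stdout);
            if (!fgets(line, sizeof line, stdin))     /* wait for keyboard input */
                break;
            line[strcspn(line, "\r\n")] = '\0';       /* strip the newline */

            if (strcmp(line, "exit") == 0)
                break;
            else if (strcmp(line, "ver") == 0)
                printf("Toy shell, version 0.1\n");
            else if (line[0] != '\0')
                printf("Bad command or file name\n"); /* the traditional reply */
        }
        return 0;
    }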

4.1 Systems of this generic type dominated the scene from the mid 1970's until around 1990, when the so-called GUI (Graphics User Interface) began to enter the mass market. The concept was around long before this, inspired by a paper by Vannevar Bush in 1945, and work was commissioned by the US Department of Defense (ARPA) between 1969 and 1975 (5), but CRT resolution was inadequate for the task until the mid-70's. The phrase is ambiguous, since it may be taken to mean either an interface allowing use of VDU graphic capabilities or a general interface for users which employs graphic presentation of information. The two tend to be antagonistic, since if both are used together, attempts to use the graphic capabilities involve a perpetual struggle for control of the screen.

4.2 A UNIX version, X Windows, was produced by workers at the Massachusetts Institute of Technology in 1984 and matured into a stable standard version in 1988. In line with UNIX culture this is Open Source (ie the source code is freely available) and is 'mechanism not policy', ie it runs on top of the UNIX command line kernel but the latter can be operated independently (this is not true of Microsoft (6) offerings). Microsoft produced their first attempt, Windows 1 running on the Intel 286, in 1985 but did not produce a version acceptable to most users until 1990 (Windows 3 running on the Intel 386).

4.3 The acronym WIMP (windows, icons, multi-tasking, pointer) is often used to characterize a GUI, but the only completely new element in this is the use of icons, since the other three were routinely used in command-line systems. The significance of the icons had to be learned, like command names in a DOS environment, and in many cases they had little mnemonic value (like Chinese inscriptions in an Egyptian pyramid), so they were quickly augmented by names, drop-down menus, and key-press activation. The major change in fact was to the scope of multi-tasking, which basically has nothing to do with graphic presentation.

4.4 Low level multi-tasking has always been an essential feature of machine control, to deal with housekeeping tasks which occur at unpredictable intervals and have to be dealt with promptly. In some cases, eg direct memory access for disk operations and screen refreshing, these do not necessarily involve the CPU directly and it suffices to suppress the CPU clock pulses on the next instruction fetch cycle and activate the direct connection between peripheral and memory, without needing to save any data relating to the suspended task. In other cases, where an interrupt service subroutine needs to use the CPU, it is necessary to save the contents of the appropriate CPU registers on entry and restore them before exit (eg keyboard and printer routines).
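
The two cases can be contrasted in a sketch; the 'registers' are modelled as globals purely so that the save and restore can be written out explicitly (on real hardware they are a few PUSH and POP instructions inside the routine itself), and the scan-code handling is invented.

    #include <stdio.h>

    static unsigned reg_a, reg_b;       /* registers belonging to the interrupted task */

    /* Case 1: DMA-style transfer.  The CPU clock is simply withheld for a fetch
       cycle while the peripheral talks to memory directly, so nothing belonging
       to the suspended task needs saving.  (Nothing to show - that is the point.) */
    void dma_cycle_steal(void) { /* peripheral <-> memory transfer happens here */ }

    /* Case 2: an interrupt service subroutine that does use the CPU.  It must save
       the registers it is about to overwrite and restore them before exit. */
    void keyboard_isr(void)
    {
        unsigned saved_a = reg_a, saved_b = reg_b;   /* save on entry */

        reg_a = 0x60;                                /* pretend to read a scan code */
        reg_b = reg_a & 0x7F;                        /* and do something with it */
        printf("key code %u serviced\n", reg_b);

        reg_a = saved_a;  reg_b = saved_b;           /* restore before exit */
    }

    int main(void)
    {
        reg_a = 111;  reg_b = 222;                   /* state of the interrupted task */
        dma_cycle_steal();
        keyboard_isr();
        printf("task state intact: %u %u\n", reg_a, reg_b);
        return 0;
    }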

4.5 The objective of the changes was to extend multi-tasking to a higher level, allowing unrelated application programs to share CPU time and run in the background. In this case the state of the program to be suspended is unpredictable, so that, at the least, any memory areas used for temporary storage of data must be saved along with the CPU registers. A further problem for Microsoft was that most high-level system functions in DOS were not reentrant (ie they could not be re-entered until any call already in progress was completed), so that in general these also had to be saved when interrupted. One solution is to save the suspended program and the high-level system programs (7) in their entirety in a protected area of storage, together with the CPU register contents and screen image when appropriate (comprising a 'virtual machine'), restoring these before resuming the interrupted action. The heavy requirement for storage and time this approach involves is a large factor in the subsequent inexorable demand for larger memory and higher clock speeds.
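
The bookkeeping this implies can be sketched as a pair of copy operations over an invented 'virtual machine' structure; the names and sizes are arbitrary, but the final line gives a feel for how much has to be shifted on every switch.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define WORK_SIZE   4096            /* the task's temporary data area (size invented) */
    #define SCREEN_SIZE (80 * 25)       /* a text-mode screen image */

    typedef struct {                    /* the 'virtual machine', held in protected storage */
        uint32_t regs[16];              /* CPU register contents */
        uint8_t  work[WORK_SIZE];       /* memory used for temporary data */
        uint8_t  screen[SCREEN_SIZE];   /* screen image, when appropriate */
    } virtual_machine;

    static uint32_t cpu_regs[16];       /* the live machine state */
    static uint8_t  work_area[WORK_SIZE];
    static uint8_t  screen[SCREEN_SIZE];

    void suspend(virtual_machine *vm)   /* copy the live state into the snapshot */
    {
        memcpy(vm->regs,   cpu_regs,  sizeof cpu_regs);
        memcpy(vm->work,   work_area, sizeof work_area);
        memcpy(vm->screen, screen,    sizeof screen);
    }

    void resume(const virtual_machine *vm)  /* put it all back before the task restarts */
    {
        memcpy(cpu_regs,  vm->regs,   sizeof cpu_regs);
        memcpy(work_area, vm->work,   sizeof work_area);
        memcpy(screen,    vm->screen, sizeof screen);
    }

    int main(void)
    {
        static virtual_machine snapshot;    /* the protected area of storage */
        cpu_regs[0] = 42;
        suspend(&snapshot);                 /* pre-empt the running program */
        cpu_regs[0] = 0;                    /* some other task runs meanwhile */
        resume(&snapshot);                  /* restore before resuming */
        printf("register 0 restored to %u; each switch copies %lu bytes\n",
               (unsigned)cpu_regs[0], (unsigned long)sizeof snapshot);
        return 0;
    }

Multiplied by frequent switches, that copying is the storage and time bill referred to above.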

4.6 The rapid growth of the Internet in the mid 1990's created a new demand for cross-platform working (ie accommodating machines using different microprocessors (8) and operating systems, as well as multiple programs and users). The UNIX stream of development already provided this, but the proprietary software stream (by this time dominated by Microsoft and PC systems) was still restricted to one user at a time. Attempts by Microsoft to provide an Internet service in the form of Internet Explorer have produced a confused situation involving legal and political factors, since their tactics have been ruled illegal under US anti-competition law. Since legal mechanisms grind much more slowly than technological development in the computer field, and the legal profession has a keen eye for the profitable side of the asinine, no clear resolution has emerged as yet. A number of languages designed to facilitate cross-platform working have appeared, such as HTML, Java, JavaScript (no relation except in name), and PHP, but by their very nature they tend to be substantially slower and larger than systems using native machine code and no clear favourite has emerged.

4.7 The use of cross-platform utilities has significant repercussions for computer security, particularly in a networking context, since the existence of such facilities can allow electronic intruders to take covert control of a machine. Some Internet users resort to disabling such facilities, but downloading and scripting programs do not always seek permission before modifying a recipient machine's configuration, so the facilities can be re-enabled without the user being aware of it (9). It seems likely that controversy over this issue will escalate in the future in the light of the activities of an alliance of microprocessor manufacturers (10) and Microsoft's plans to introduce the Palladium operating system, which will force users to accept external control of their machines. Since this appears a direct threat to the Open Source movement (ie forcing it to acquiesce in control of the use of its products by commercial competitors) and is arguably contrary to privacy and anti-hacking law, further legal/political battles can be expected.



1. This was sufficiently close to the 8080 that only minor changes were needed and mechanical translation sufficed for the most part.


2. The details given here are based specifically on Intel processor machines of the large scale integration era (ie 8086 to Pentium) with PC architecture, which dominate the present scene, but most other machines share their general characteristics to a large degree.


3. In some machines this function has been sub-divided by including storage for short hard-wired 'microcoding' programs to combine simpler ALU functions to give a wider range of more complex operations.


4. The name DOS (Disk Operating System) is a historical survival - the program quickly expanded to include a wide range of peripherals.


5. Subsequently research was continued at the Palo Alto Research Centre (set up by Xerox in 1975, alarmed by rumours of the 'paperless office'). A number of working examples were constructed but made no impact on the mass market due to the very high cost. The idea was taken up by Apple in 1980 in an attempt to stem a decline in sales due largely to the conviction of the founder and CEO, Steve Jobs, that everyone should follow the paths of righteousness and do everything his way. The high cost ($10000) and limited capability of the machine (Lisa) attracted few customers, but by the mid-80's its successor (Macintosh), optimised for DTP and word processing, enjoyed a limited success for a time.


6. Microsoft seem to have been trying to kill two birds with one stone - the UNIX command line set-up was multi-tasking and multi-user from the start and presumably did not have the same problems with GUI's.


7. There are simpler ways of making them reentrant, but these give 'first in, last out' operation which is not always acceptable. The alternative is to rewrite the system service routines from scratch, which was done in later versions at the cost of compromising the independent use of the command-line mode.


8. UNIX had already led the way by rewriting the source code in C and distributing it freely, but problems still arose in that proprietary compilers tended to include 'enhancements' which prevent code written in other proprietary dialects from compiling properly. This eventually produced an 'Open Standards' movement to control such extensions. The fact that a range of machine languages has to be accommodated, and the approximation necessarily involved in handling real numbers (as opposed to integers), make strict compliance testing difficult and minor problems continue to arise.


9. Email services are often the target of such tactics and a potent means of distributing computer viruses.


10. The Trusted Computing Platform Alliance - apparently set up this way to avoid the US anti-competition laws that trapped Microsoft - is backed by the music/film industry lobby, who possibly foresee a profitable era of 'pay per play'. The political/security establishment seem to see this as a long-sought opportunity to establish control over Internet activity, but one of the leading proponents, the NSA, admits that in its own activities a 'trusted' feature is defined as one which allows security to be broken.