Quante Vadis


"Now ,here, you see, it takes all the running you can do to stay in the same place. If you want to get somewhere else ,you must run at least twice as fast as that"
Lewis Carroll,  'Alice through the Looking Glass'

1.1 Apart from peripheral units needed to communicate with the outside world, current computers consist of a vast collection of identical electronically-operated semiconductor switches (1) embedded in high purity silicon crystals. Operation involves opening and closing these in appropriate patterns and sequences. Early microcomputers contained only a few thousand such switches but current machines incorporate numbers in the hundreds of millions range. Ensuring that the required initial pattern is established when power is applied is fortunately a problem for the chip designer, but some understanding of the long and tortuous connection between these patterns and user requests can be useful in assessing the suitability of a machine for a particular application and in understanding what is happening when the responses are unexpected. The casual user perhaps has limited options but, at the least, it is prudent to have some idea of what it is you are paying for: the best advice is often 'if it ain't broke don't fix it'.(2)

1.2 The fundamental structure of a universal Turing machine comprises only two elements, a read-write store for symbols and program and a logic unit capable of changing the current symbol according to the current state of the machine. In its simplest terms the machine uses a binary symbol alphabet (a logical 'true' or 'false' value) with three instructions, viz. read from memory, modify the symbol according to the current machine state and the program (representing the relevant transition matrix), and write the result to memory. When it was realised that all logical and arithmetical operations can be implemented in terms of a single dyadic logic operation (either a logical NAND or NOR), technological research could be concentrated on producing and optimising such a switch, and practical implementation of the universal machine quickly became possible.(3)
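The universality of NAND mentioned above can be demonstrated in a few lines. The sketch below (in Python purely for illustration; the real gates are of course hardware) composes NOT, AND, OR and XOR from NAND alone and checks each against the built-in connectives:

```python
def nand(a: bool, b: bool) -> bool:
    """The single dyadic primitive: true unless both inputs are true."""
    return not (a and b)

# Every other Boolean connective composed from NAND alone:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Exhaustive check over all four input combinations:
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
```

The same construction works starting from NOR; as footnote 3 notes, the choice between them is practical rather than fundamental.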

1.3 The 'program' required to produce a specific Turing machine for a particular task subsists in the pattern of connections between the switches and for fixed programs these could be realised by chemically engraving a web of conducting connections within the surface of the silicon crystal. Life is too short to use the programming system originally described by Turing for the hypothetical machine and switches are invariably grouped in small units to produce a 'machine language' vocabulary of more complex operations which can be called up by a program stored in read-write memory.

2.1 The choice of a set of machine language instructions is an uneasy compromise, ill-defined because what the users will want to do is unknown in advance. In practice it is invariably augmented by a set of higher level functions stored in read-only memory chips ('firmware') in both the CPU control section and individual peripheral units, to allow replacement if necessary. These higher level functions serve the purpose of isolating the user from the low-level programming detail by setting up standardised interfaces for frequent operations. For example, whatever the detailed machine instructions needed, the information required to transfer data from memory to a magnetic disk, say, can be summarised as:-

(a) A transfer is requested
(b) Where is the data and where is it to go ?
(c) How much data ?
(d) I am ready to send/receive
(e) data transferred without error

A stored program which can act on this information and generate the necessary signals to accomplish the transfer shields the user from the individual quirks of peripherals and from changes due to upgrading. In a few cases the microcoding is installed in EEPROM (electrically erasable ROM), which can be modified electrically in situ.
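The five-point handshake above can be collapsed into a single standardised entry point. The following is a toy model in Python (the names `DiskDevice` and `transfer`, and the byte-array 'disk', are invented for illustration and correspond to no real firmware interface):

```python
class DiskDevice:
    """Toy peripheral: a flat byte store standing in for a magnetic disk."""
    def __init__(self, blocks=16, block_size=8):
        self.store = bytearray(blocks * block_size)
        self.ready = True                     # (d) device ready to send/receive

def transfer(device, memory, src_offset, dst_offset, count):
    """One standardised call covering points (a)-(e) of the handshake."""
    # (a) the call itself constitutes the transfer request
    # (b), (c) source, destination and length arrive as parameters
    if not device.ready:                      # (d) refuse if the device is busy
        return False
    if dst_offset + count > len(device.store):
        return False                          # would overrun the device
    device.store[dst_offset:dst_offset + count] = \
        memory[src_offset:src_offset + count]
    # (e) confirm the data arrived without error before reporting success
    return device.store[dst_offset:dst_offset + count] == \
        memory[src_offset:src_offset + count]

memory = bytearray(b"hello, disk!....")
disk = DiskDevice()
ok = transfer(disk, memory, 0, 0, 12)
```

Whatever the machine instructions underneath, the caller sees only the standardised parameters and a success/failure result, which is precisely the shielding the paragraph describes.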

2.2 Debate arises from time to time as to the relative merits of CISC (complex instruction set computer) and RISC (reduced instruction set computer) microcoding, but the distinction is largely artificial in the light of the presence of firmware. The term RISC was originally coined as an advertising gimmick by a latecomer anxious to get a toe-hold on the departing microprocessor bandwagon who was only able to produce a primitive device in time. The relative simplicity, however, reduced the production costs and allowed the chip to be run at a slightly higher clock speed. In practice, though, programs had to be longer to compensate for the missing microcode and there was generally no overall gain in speed, but a lot more work for any programmer gullible enough to use it.

3.1 The early machines were large and expensive (4) and rarely seen by the users. The system programs were operated by specialists trained by the manufacturers, and in the large institutions that could afford them a multi-user system was required. This was usually implemented by a batch file system whereby the users submitted programs, initially on punched cards or paper tape and eventually from remote teletype terminals, which were run individually ('in batches') as and when time allowed. A major issue with such systems is computer security, to avoid accidental or malicious damage to the operating system or other tasks, and user programs were typically run in a 'sandbox' which had read-only access to system programs and could only modify a specified area of memory not shared with other active tasks. Even at the low CPU speeds these early machines could achieve, the time taken to read input data and output results was much longer than CPU execution time, severely limiting throughput and making auxiliary 'message concentrators' and low-level multi-tasking necessary to use CPU time efficiently.
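The 'sandbox' idea, with every write checked against the region allotted to the task, can be caricatured in a few lines. This is a software sketch only (in real systems the check is enforced in hardware by a memory-management unit), and the class name and field names are invented for the purpose:

```python
class Sandbox:
    """Toy model of per-task memory protection: reads are unrestricted
    (covering read-only access to system programs), writes are confined
    to the task's own region."""
    def __init__(self, memory, base, limit):
        self.memory = memory    # the whole machine's memory
        self.base = base        # start of this task's writable region
        self.limit = limit      # size of that region

    def read(self, addr):
        return self.memory[addr]

    def write(self, addr, value):
        if not (self.base <= addr < self.base + self.limit):
            raise PermissionError(f"write to {addr} outside sandbox")
        self.memory[addr] = value

mem = bytearray(64)
task = Sandbox(mem, base=32, limit=16)
task.write(40, 7)            # inside the allotted region: allowed
try:
    task.write(0, 7)         # would clobber the 'operating system': refused
    trapped = False
except PermissionError:
    trapped = True
```

The batch monitor survives a misbehaving user program because the out-of-range write is refused before any damage is done.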

3.2 By the late 1960s the incorporation of transistors, integrated circuits, and magnetic core memory had reduced the size and cost of mainframe computers to the point where minicomputers became much more widely available. Since many of these did not come with built-in manufacturers' operating systems, the ranks of system programmers were swelled by many 'amateur' programmers, resulting in rapid advances in computer science (i.e. how to control and program computers).

3.3 The introduction of large-scale integration in the Intel 8080 and Zilog Z80 microprocessors in the early 1970s produced the 'home computer' revolution and a further increase in the ranks of amateur programmers who, by the mid-80s, were often able to outdo the professionals. Subsequently the drive to increase speed and reduce size and power demands continued, but fundamental changes in hardware were relatively minor and the focus of innovation shifted to software and firmware.

3.4 In respect of operating systems the main thrust of commercial pressures has been to maximise functionality in attempts to produce a 'universal' machine which can be all things to all men. As a result operating systems have become so complex that they are largely unintelligible to most users and, one suspects, to the system designers and programmers who produce them. The other side of the coin is that for many purposes a truly universal machine is overkill. Acquiring an over-specified machine in the expectation of a need to do something new and different in the future is rarely advantageous: it will be less efficient for the present, and by the time you come to attempt the new task your ideas will have changed and there will be a better way of doing it.



1. It was not always so - thermionic valves and magnetic, pneumatic, hydraulic, and superconducting elements have been investigated in the past. It may not always be true in the future - systems based on organic molecules and photon-actuated devices are currently being investigated in the search for smaller and faster systems.

2. This saw has been enthusiastically adopted by the computing community, but dates back to at least the heyday of American vaudeville (1920, say), when it represented paternal orders on treating the family Model T Ford.

3. The advantage is practical rather than fundamental since combinations of other connectives work perfectly well but require more development effort. An essential feature is that gates should have some power gain so that each output can 'fan-out' to drive more than one input.

4. Early machines were typically costed at £1 per second of run time, generating a culture dedicated to efficient programming. By the 1990s this had been reduced to 1p per second, making programmers' time more costly than CPU time.