
CHAPTER 2

DEFINING HUMAN-COMPUTER INTERACTION

 

Capitalism seals the relationship between humans and digital computers. Such coupling seems permanent in developed economies and in the newly industrialized nations of Asia as they adopt the path of technological development. The relationship is a double bind: humans adapt to technological demands, whilst digital computers are anthropomorphized so that they may "perceive" human actions accurately.

We often fail to realize the mutual relationship between humans and machines. Winograd and Flores (1986), drawing on Heidegger, stressed the importance of concepts such as "thrownness", "readiness-to-hand" and "breakdowns" in the continuum of machine-aided human activity. Humans, being dynamic, modify our behaviour to suit the demands of machine utilization, to achieve a state of functional transparency. This functional transparency ensures an optimum continuum in a task-oriented activity. When performing a task, we do not see a division between the subjective self and the objective position of the computer; the machine and user are intertwined into a functional whole. The user has been "thrown" into a situation of which he or she is intrinsically a part, and has no functional need to consider the nature of the machine used nor its components. The split occurs only when there is a breakdown in the continuum: when the components fail to serve their function, or when we cannot perform the particular task through ignorance, or both, our nature is called into question. Only then do we have to assess the different parts of the system to rectify the problem. Heidegger's concept of "readiness-to-hand" should be a primary consideration, not only because it explains much about the expert/novice division in skill acquisition and the development of expert systems, but because it is also a potential guide in the search for the right platform for the human-computer interface.

Human-computer interaction generally occurs at different levels:

Stratum 1. The lowest level of human-computer interaction, where intelligent computer behaviour is nil. The interaction is unidirectional, with users manipulating computational processes, e.g., word processing, spreadsheets, drawing packages. The computer is a tool.

Stratum 2. Human-computer interaction is more sophisticated: still information based, but more social, with more active and instructional computers that exhibit some form of intelligent behaviour. At the lowest levels, these computers use more sophisticated computational theories, e.g., probability and formal logic. Programs such as ELIZA, which rely on basic mapping functions and syntactic tricks, respond to sentences keyed into the computer by the user: instead of understanding what the sentences mean, they map inputs against their database to provide an output (a minimal sketch of this kind of pattern-matching follows this list). Such computers fake comprehension of their information inputs. Processing power varies according to the capacities of the computer hardware and software. At best, the development of more sophisticated technology will give us computer vision, speech recognition and thought patterns similar to those of humans. Stratum 2 is much more complicated, with different levels of possibilities:

 

2.1 Advances in Artificial Intelligence research will simulate intelligent behaviour in computers. The human-computer interface is deepened by machines understanding human speech and natural language, including translation between multiple natural languages. Computing techniques are more sophisticated, utilizing parallel or neural[3] network processes to facilitate the spread of functions. Machine-machine interface and interaction are also enhanced by the capacity for multiple-platform utility, such as computer networking and multimedia interactivity. Hence, the relationship is one of structural coupling (Maturana & Varela, 1980, p. xx; Winograd & Flores, 1986, pp. 45-49) between human beings and "simulated humans," albeit of different cultures.

2.2 Autonomous machines, with "minds" of their own. This stratum is hotly debated by AI proponents. A level of intelligence that is unique to the empiricism of the learning machines, e.g., an advanced neural network (Stillings et al., 1989; McClelland & Rumelhart, 1981). Here the computer is capable of defining and establishing relations between things according to its own observations. The "mind" within the computer is not a simulation of a human "mind," but belongs uniquely to the computer, with its own intelligence and knowledge.

Stratum 3. Involves not only information systems, but robotics and other forms of mechanical, synthetic bodies. Applications vary from bionic body parts to free-moving, autonomous robots. The computer in this instance is generally a metaphor for the human species.
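
To make the ELIZA-style mapping described under Stratum 2 concrete, the following is a minimal sketch in Python of pattern-matching without comprehension. The rules, response templates and the respond function are invented for illustration; Weizenbaum's original program used a richer keyword-ranking and transformation scheme.

    import re

    # Invented rules for illustration: each pattern maps a fragment of the
    # user's sentence onto a canned response template. No meaning is
    # understood; input is simply matched against a stored rule base.
    RULES = [
        (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
        (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0} seems important to you."),
    ]
    DEFAULT_RESPONSE = "Please go on."

    def respond(sentence):
        """Map an input sentence to an output by pattern lookup, not comprehension."""
        for pattern, template in RULES:
            match = pattern.search(sentence)
            if match:
                return template.format(*match.groups())
        return DEFAULT_RESPONSE

    print(respond("I am unhappy with this machine"))
    # prints: Why do you say you are unhappy with this machine?

The program "fakes comprehension" in precisely the sense described above: whatever intelligence appears in the output was placed there in advance, in the rule base, by the programmer.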

Stratum 1 was the first milestone of the computer revolution, in the form of the early Personal Computers and Macintosh machines. As the processing power of such machines grew, they could handle more sophisticated processes. Desktop publishing sprouted in garages; Bulletin Board Systems became communication centres for users with a modem and a keen interest in meeting others in Cyberspace; and recently, electronic voice mail, multimedia applications and desktop video systems have been moving collectively to the masses. Stratum 3 is often considered high-end technology, costing millions of dollars, but moves are being made in countries such as Japan and Australia to utilize that technology not only for medical applications, but for domestic and artistic purposes. Visionaries such as Donna Haraway (1985) and William Gibson (1984) have predicted the marriage of Stratum 3 with Stratum 2, and their subsequent fusion with the human body[4]. The marriage of flesh and machine within biological machines prompted Haraway to declare that we are "cyborgs" (cybernetic organisms), "hybrids of machine and organism", and that technology "... will be the weapons in an even more fundamental revolt, against natural human limitations" (Paepke, 1993, p. 226). The working and reworking of the human body through technological or biological engineering, aided by the development of scientific research, results in a plasticity of the biological human body in which the natural and the synthetic are synonymous. Parallel to this is the path towards artificial biological systems, where machines not only simulate biological processes but complement the biological body as componential add-ons. A host of ethical questions relate to the human-computer interface, some calling for the preservation of the biological. Questions of what is natural and what is not also arise concerning theories and models of the human mind, such as the computer in Stratum 2: how natural is the learning environment in which the human mind is cultivated? Technology is often viewed as an overwhelming phenomenon to which humans are enslaved[4].

Presently, Stratum 2 occasionally supersedes Stratum 1: software has almost completely surpassed the entire operational capacity of first-generation computers. A reading of interactive computers, for current purposes, will therefore have to start at the forefront of Stratum 2 applications. Virtual Reality and other associated Stratum 2 technologies are moves by which computer technology closes in on the realm of humanity, integrating with human experience and yet exploding it into atoms across infinite space. The extensions and projections of the human sensoria across space and time are theorized by McLuhan and Powers (1989, p. 94) as a form of human implosion, in which our organs and nerves are exteriorized in place of our shell, which has been internalized. Our audio-visual senses are stimulated more intensely and collectively than ever before by machines that continually feed and inform us: initially through broadcast media like television, and now through the personal computer and multimedia technologies that integrate all forms of generically different information technologies into one ubiquitous and versatile system. Myron Krueger identified the task in creating the ultimate human-computer interface as research into the human end of the equation, because of the relative stability of human evolution compared to the "moving target of technology" (Krueger, 1993, p. 38). Krueger's position differs from many criticisms of the anthropomorphic nature of computer research and development because he avoided the natural-versus-artificial argument in his analysis. Instead, Krueger focused on the more stable component of the computer-human equation: humans. Technology must be developed to complement human activity, and not vice versa, because humans are biologically bound whereas technology is crafted. For artists, the position adopted by Krueger is most favourable because it prevents the dichotomy of art and technology by centring on humanity as the locus of all computer activity. The interaction between humans and the new generation of "artificially intelligent" digital computers, which mimic human intelligence computationally, redirects attention from the traditionally output(result)-oriented functions of digital technology towards the front end of computer processing, which deals with the interpretation of information input (Krueger, 1993), in an attempt to create a closer interface within a natural, humanistic environment in which natural speech and language are used.

Krueger's (1992, 1993) position avoided the pitfall of Weiser's (1991) prediction of our computer-oriented future. Mark Weiser (1991) of the Xerox Palo Alto Research Center suggested that the future utility of computers lies in their ability to be ubiquitous machines that seep into all facets of society, forming a kind of invisibility of usage. He further suggested that the domains of Multimedia and Virtual Reality go against the future growth of ubiquitous computers, because they make the computer the focal point of activity. Desktop computers, as we know them today, are centres where the particles of representation and activity are co-ordinated and assembled. The ubiquitous form of digital technology will be the digital components incorporated into all other technological artefacts, so as to enable easy inter-machine interfacing. Such an interface will be atomic and dispersed across almost all aspects of the physical world. The TRON Project from Japan is developing a multitude of systems intended to integrate all aspects of human activity and our material world into an electronic digital network, collectively called the Highly Functional Distributed System (HFDS) (Appendix 7).

Initiated by Professor Ken Sakamura of the University of Tokyo in 1984, TRON (The Real-time Operating system Nucleus) seeks to develop a fully integrated set of computer systems, software and chips to support these applications. In 1986, eight Japanese companies (Fujitsu, Hitachi, Matsushita, Mitsubishi, NEC, NTT Corporation, Oki Electric and Toshiba) formed the TRON Association. By 1989 the Association had attracted 125 members, including Motorola and IBM (Tatsuno, 1990, pp. 155-161). The TRON initiative is open to all technologists who can develop an efficient HFDS that will serve as the new industrial standard for future technologies. The scope of the TRON initiative is much wider than our interest in human-computer interaction, but the coupling of machines and their users is crucial to the project, because the acceptance of the HFDS will depend on its ease of use.

Apart from the physical, the abstraction of digital information has a multiplicity of functions resulting from its infinite transformation potential. We need to see the potential of Cyberspace (Gibson, 1986; Benedikt, 1991) for exploring the human experience, especially when Cyberspace is externalized (Krueger, 1993), creating a form of artificial or virtual reality that envelops the participants. Judging from the global research and development activity centring on interactive computers, multimedia and virtual reality technologies, it seems that the future growth of ubiquitous computers may rely precisely on their having a focal point of activity: Cyberspace. The inner mathematical, rational and computational world of computers becomes the platform of human activity, instead of mirroring the external material reality where physical space and the linearity of time are primary. Weiser believes that all the activity and processes carried out by computers must have real causal value in the material world, which is agreeable insofar as the artistic application of computer technologies, however abstract, has to mean something to the human inhabitants of the real physical world. In electronic art, the inner and outer worlds of computers both hold great significance and potential for creative endeavours. The disembodying effects of Virtual Reality machines allow users to journey through a multisensory narrative and so produce a new form of human experience never before conceived or perceived. The worlds within such narrative capsules are limited only by the imagination of their authors. Virtual Reality machines are the new roller-coasters of the human senses, with the potential of exposing human sensory perception like a raw nerve, generating an enormous repertoire of experiences: pain, pleasure, fear and excitement.

The new breed of computers simulates human cognition through a series of algorithms[2] (Stillings et al., 1989; Sharples et al., 1990) that interpret "natural" human communication and respond in a likewise "human" fashion that is understandable to the user. To achieve a high level of interaction, most manufacturers decided to model these "second generation" computers anthropomorphically: they are created to simulate human characteristics, to facilitate the socialization of computer technologies with humans, since computer users will only accept systems that are easy to use. Related research and development is found broadly under the umbrella term "Artificial Intelligence" (AI), and globally the race is on to develop the most cost-effective and "intelligent" computer system for the mass market. The integration of computers into the realm of human experience relies heavily on the manner in which computer technology dominates the cultural environment of its users, not only within the material space where technology and humans exist, but also within the abstract, conceptual space of the human cognitive environment. The push by the industrialized nations to develop the most comprehensive, multi-functional and user-friendly computer system, in a global war for technological and marketing supremacy, fragments computer research and development along parallel evolutionary paths (Feigenbaum & McCorduck, 1984). The responsibility for innovating novel and creative ways to communicate among ourselves and to interface with machines lies in the hands of artists, who continually explore the possibilities.

"The introduction of information technology appears to lead in changes in domestic socio-economic and cultural systems. However, this does not appear to be a function of the information itself (however this concept is defined), but rather it is the negative externalities attached to the production and consumption of information that appear to require re-design of our major institutions, including the State itself." (Benjamin, 1986, pp. 153).

How is human-computer interaction achieved, and at what level? There are numerous theories attempting to answer the question, but we must journey through the main concerns of AI research to understand the issue. The focus is kept on the disparate philosophical theories underpinning AI research, development and appropriation in the USA and Japan, both technological leaders of the world with very different undertakings, for evidence of their influence in defining the world and their relationships with others. This exploration is prefaced by our goal of utilizing computer technologies within the arts arena by forming a symbiosis between the environments of the material world and the abstract Cyberspace of computers, linked by a human agent. The electronic arts were spawned out of computer technology, and their growing popularity indicates the pervasiveness of computers in society.

NOTES

1. "Humans have been shaped by nature, by the forces of natural selection. Our genes determine how we respond to the daily challenge of living. The process can be described by a deceptively simple equation that says, in essence, that we are the sum total of the interactions between our genes and our environment." Graham O'Neil (1990, p. 43), writing for 21C, a magazine dedicated to the future, published by the Commission for the Future in association with ABC-TV's Quantum programme. In the same issue, 21C issued the following definitions:

 

Chromosomes: found within the nucleus of cells in all plants and animals, chromosomes contain genes arranged in order along their length. Cells contain two sets of chromosomes, one set from the mother and one from the father. Each species has a fixed number of chromosomes, which make up its genome.

Genes: the basic units of inheritance, which determine the characteristics of the individual. Genes are bits of information within the DNA molecule, and are capable of mutation.

DNA: deoxyribonucleic acid, the main carrier of genetic information in living organisms. Most of the DNA of a cell occurs in the nucleus as part of the chromosomes. DNA molecules are double-stranded helices, consisting of two chains wound around each other. DNA is present as a single thread-like molecule in each chromosome.

Genome: the genetic map or code of a living species.

Protein: proteins play a fundamental role in the processes of life. They form hair, skin, muscle and cartilage. All enzymes are protein in nature and many hormones are proteins. Proteins consist of amino acids.

2. Algorithms: a sequence of well-defined rules and instructions describing a procedure to solve a particular problem (McCormack, 1993, p. 16). Algorithms are implemented on computers via programs that express one or more algorithms in a manner understandable by computers; the abstract rule that expresses an action in a program is called an algorithm (McCormack, 1993, p. 27).
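
As a concrete instance of this definition, the following short Python program expresses a classic algorithm, Euclid's method for finding the greatest common divisor. The example is ours, chosen for brevity, not drawn from McCormack.

    def gcd(a, b):
        """Euclid's algorithm: a finite sequence of well-defined rules.

        Repeatedly replace the pair (a, b) with (b, a mod b); when the
        remainder b reaches zero, a holds the greatest common divisor.
        """
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(48, 18))  # prints 6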

3. Neural networks are systems for the competitive processing of input, modelled on the structure of the human brain. What distinguishes neural network systems from earlier computer models is that the primacy of rules in a computational language is relaxed, and learning is guided by the analysis of the output, for instance by a human judge. The systemic organization is carried out by adjusting information threshold levels at the inputs: if the actual output does not match the desired output, the threshold levels along the active pathways are adjusted to increase the discrimination of the input.
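
A minimal sketch, in Python, of the threshold-adjustment scheme described above, in the spirit of a single threshold unit (a perceptron) learning the logical AND function. The data, learning rate and update rule are illustrative assumptions; real neural networks adjust many weighted connections across layers of such units.

    # A single threshold unit learning AND: when actual output differs
    # from desired output, the weights and threshold along the active
    # input pathways are adjusted.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, threshold, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):  # repeated presentations of the training set
        for (x1, x2), desired in examples:
            actual = 1 if weights[0] * x1 + weights[1] * x2 > threshold else 0
            error = desired - actual
            weights[0] += rate * error * x1   # adjust only active pathways
            weights[1] += rate * error * x2
            threshold -= rate * error         # raise threshold to sharpen discrimination

    for (x1, x2), desired in examples:
        output = 1 if weights[0] * x1 + weights[1] * x2 > threshold else 0
        print((x1, x2), "->", output)  # matches the AND truth table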

4. "Tetsuo", a film by the Japanese director Shinya Tsukamoto (1989), portrays the human-machine relationship as a violent one, where the integration of machine with the human body is a painful process of wounding and the subsequent loss of the vulnerable flesh. Such an interaction means the replacement of the naturally biological (flesh) with the synthetically biological (metal). Tsukamoto's vision is antithetical to Haraway's: Haraway celebrates technological invention as a means of resisting the mortality of the flesh, whereas the flesh of "Tetsuo" is overcome by a process of transformation in which flesh becomes iron and loses all biological characteristics. Donna Haraway (1985) posits that we are moving ahead into a fusion of machines with humans; AI research is used not just to develop intelligent machines, but to utilize machines biologically to boost human capacities. Technology in "Tetsuo" consists of big, grotesque machines that are clumsy and intruding, capable only of extreme violence, whilst Haraway's cyborgs are slick, small, and almost invisible in their incorporation with the human body. Both theses are extreme in their portrayals of technology. Within the context of computer technology, its utility with a host of complementary synthetic innovations has the potential to synthesize with humans without the predicted violence, through the interpretation of activities within the human body. This synthesis will be more subtle: electromagnetic or neural impulses, bio-rhythms, speech and other muscular activities are biologically generated messages which can be sampled by computer technologies for task performance. The role of the artist is to find a position between the two extremes, so as not to negate the potential of technology, but to appropriate it in the best possible manner, benefiting humans without any form of violence. The interactive and negotiating processes found in human-computer interaction will be dynamic and potentially revolutionary. Electronic artists are continually exploring this potential in their work, seeking to find elegance in computer technology: elegance in performance, in interface, in interactivity, in expression.

 

 

 
