KENNETH LIM @ ART PAGES

 

CHAPTER 4

DIGITAL INTELLIGENCE:

PARALLEL FLOW OF MULTIPLES?

The goal for Artificial Intelligence[1] is to mimic human intelligence by having an information-processing power and knowledge system flexible and dynamic enough to operate within the open dynamics of a natural social environment, without comprehensively defining or mapping out relations of representations formally. The reason for such mimicry rests on our functional definition of computers: what do we want out of computers? It seems pointless to create an alien form of intelligence in computers that is totally outside the realm of human understanding and experience. The metaphorical nature of computer technology outlined by McLuhan and Powers (1989) supports the argument that computer technology is a tool that facilitates the changing of the environment inhabited by humans as they are influenced by the utility of the computer system. The focal point is always that of the human user, even though users' demographics may differ, causing the emergence of computers with different levels of "intelligence" specified by market demands. The flexibility and dynamism of AI machines have to be such that their knowledge systems update themselves internally, through learning and self-organisation, as they collide with the social and material reality of their environment through the process of interpretation. AI proponents are spread across a wide spectrum of research activity, reflecting mixed confidence in ever discovering and formulating the human "mind".

AI research covers a broad spectrum, but its primary research centres on interactive computer technology to facilitate the human-computer interface, a vital component for the future of computerization given the continuing decentralization of the technological market into homes, following the downturn of industrialization and the upward swing of the service industry and small business enterprises in these recessionary times. The corporatization of homes in search of the dollar has seen the networking of home computer technology with other corporations to channel the capital flow of households into desired paths.

The seemingly infinite power of computer technology has influenced a myriad of interpretations of its utility. A generally accepted framing of AI's potential is John R. Searle's (1980) distinction between "weak AI" and "strong AI". Weak AI uses computers as tools for modelling and testing theories of human cognition in an effort to understand how the brain works, how perception occurs, how memories are stored and retrieved, and so on. The quest of Strong AI is to develop a machine that does not merely mimic intelligence or consciousness, but actually has the intelligence, or the capacity to understand and develop a system of meaning.

Supporting neither the Strong nor the Weak AI claims, this thesis works on the confidence that AI technology will develop a form of "machine intelligence" that is autonomous in its own right. The embodiment of cognitive processes differs between computer technology and humans; hence, we are stretching our expectations if we support the claims of Strong AI. Roger Penrose's case against Strong AI in "The Emperor's New Mind" (1989) is supported here in light of the different information-processing capabilities of a computer, where the input obtained through its "artificial sensoria", such as a digital camera, is digitized into numericized arrays of information that are not differentiated from other inputs. In other words, the computational input from a digital camera is no different from a word-processing input, both comprising only binary 1/0 computation. Artificial Intelligence will develop in Natural Language processing and in other processing systems, such as neural networks, which imitate the procedural functions of the human brain. What comes out of this research will ultimately be a "machine intelligence" that depends on learning processes under the tutelage of humans. "Machine intelligence" will evolve around the idiosyncrasies of its own rationale and logic systems as it learns from humans, forming its own conclusions about the world as we show it, through our corrections of its output responses. Humans are not at all concerned about the "hidden layers" of computer processing systems, such as neural network systems; we are only interested in the behavioural aspects of computers, not their inner functions.
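The undifferentiated nature of digitized inputs described above can be sketched directly. In this illustration the pixel values are hypothetical, chosen so that the camera's samples and the word-processed text happen to coincide byte for byte: to the processor, both are simply the same run of 1s and 0s.

```python
def to_bits(data: bytes) -> str:
    """Render any byte stream as the raw 1/0 computation the processor sees."""
    return "".join(f"{byte:08b}" for byte in data)

# Two very different "artificial sensoria": a few grey-level samples from a
# digital camera (hypothetical values), and a fragment of word-processed text.
pixel_samples = bytes([72, 105, 33])   # hypothetical pixel intensities
typed_text = "Hi!".encode("ascii")     # keystrokes as character codes

# At the binary level, nothing distinguishes the image from the text:
assert to_bits(pixel_samples) == to_bits(typed_text)
```

The point of the sketch is that the computer carries no intrinsic marker of an input's origin; only the program that interprets the bytes gives them a sensory "meaning".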

The most powerful feature will be the translation of the 1/0 binaries into another system of communication. As a term, AI is often a misnomer, because the huge bulk of AI activity, or supposed "machine intelligence", actually deals with "machine perception": the activity of "knowing" or "recognizing" the nature of its inputs for processing. Within human-computer interactivity, the inputs are samples of the physical reality of the user's world; an analog instance is converted into a numerical value, which the computer processor treats as a variable input to be processed mathematically to produce a desired output.
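The sampling of an analog instance into numerical values can be sketched as follows. The signal, sampling rate and quantization depth here are all hypothetical, chosen only to show the conversion the paragraph describes: a continuous signal is measured at fixed instants, and each measurement is mapped to one of a finite set of numbers.

```python
import math

def sample(signal, rate_hz: float, duration_s: float, levels: int = 256):
    """Sample a continuous (analog) signal at a fixed rate and quantize each
    instant into one of `levels` integer values."""
    n = int(rate_hz * duration_s)
    codes = []
    for i in range(n):
        t = i / rate_hz
        value = signal(t)                             # analog instance in [-1, 1]
        codes.append(round((value + 1) / 2 * (levels - 1)))  # map to 0..levels-1
    return codes

# A 1 Hz sine wave sampled 8 times over one second, quantized to 8 bits:
codes = sample(lambda t: math.sin(2 * math.pi * t), rate_hz=8, duration_s=1)
```

Each element of `codes` is exactly the kind of undifferentiated variable input the processor then treats mathematically, regardless of whether the original signal was sound, light or touch.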

Such sampling facilitates the construction of possible worlds bearing a form of cross-world relation. Putnam defines cross-world relations as "two term relation...(where) its extension is a set of ordered pairs of individuals not all in the same possible world" (Putnam, 1990, p. 313). The corresponding relation between two entities in different worlds - one abstract (Cyberspace) and the other material (the real world) - is vital to computer users, because the socialization of meaning and signification has now shifted towards the abstract world. The contact hours between humans and the sampled possible world of the computer are gradually superseding the contact time between humans and their immediate material environment. Our future will be one where humans are initialized first by digital machines before being introduced to the material world. Hence, the divide between real and simulated experience diminishes, blurring the boundaries between fiction and fact.

However, the potential for intellectual and artistic excavation is infinitely greater: a message, when mediated properly, will have the most explosive impact on the receiver. Socialization processes will be redefined, with social boundaries delineated according to user preference for stimulation within particles of social sites. The most interesting aspect of computer technology for artists is its seemingly infinite creative potential. Numerical representation within computer processes can be translated graphically into anything. Works by the Japanese computer artist Yoshiyuki Abe, such as "Legend" (1992), are graphical representations of complex mathematical algorithms processed through the elaborate number-crunching facility of the computer. Although the abstraction of the algorithms bears no real reference or significance for either the artist or the audience, their translation into graphical terms has enormous aesthetic appeal. The mathematical algorithms, which once belonged only to the fields of science, are translated to appeal to the human senses for signification, moving into the fields of the humanities.
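The translation described above, from an abstract numerical algorithm into something for the senses, can be sketched minimally. The interference formula below is arbitrary, invented purely for illustration (it is not Abe's actual method); its values carry no referential meaning, yet rendering them as character intensities yields a visible pattern:

```python
import math

WIDTH, HEIGHT = 32, 12
SHADES = " .:-=+*#%@"   # darkest to brightest

def intensity(x: float, y: float) -> float:
    """An arbitrary mathematical algorithm with no referential meaning."""
    return (math.sin(10 * x) * math.cos(10 * y) + 1) / 2   # value in [0, 1]

rows = []
for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        v = intensity(i / WIDTH, j / HEIGHT)
        row += SHADES[min(int(v * len(SHADES)), len(SHADES) - 1)]
    rows.append(row)

print("\n".join(rows))   # the algorithm, translated for human perception
```

The number-crunching is indifferent to aesthetics; the aesthetic appeal arises only at the moment of translation into graphical terms.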

Artificial Intelligence is about the art of creation: of creating a machine that is deemed "intelligent" enough to create, and ultimately to satisfy the needs of humans. The creative potentials of AI technology are numerous: Cyberspace, the creation of a mathematical world that exists beyond the glass screen which keeps us separated; Virtual Reality, the sampling of reality and more; Multimedia culture machines, integrated audio-visual environments; and Robotics, which takes autonomous machines a step further.

As argued earlier, there is a lot at stake in the technological race towards autonomous "thinking" machines. Japanese technologists are currently attempting to build the Fifth Generation "intelligent" computers to prove their ingenuity as world-class innovators, and not mere imitators. The USA, on the other hand, has to reclaim its reputation in the competition of technologies, which has been sliding in favour of the Japanese; in many ways, armed with postmodern theories, it is beginning to regain lost ground by attempting to work past the "visual/tactile space" of holism, discreteness and linearity of its traditions, and into the domain of the multiple centres and spheres in the "acoustic space" of the East (McLuhan and Powers, 1989). McLuhan and Powers (1989) see a crossover of both East and West, where the appropriation of technology by the East fragments the collective consciousness (acoustic space) of the society into individualism (visual space), whilst the Orientalism of the West grows. This crossover is best reflected by computer technologies, as they will embody the mutation of both traditions.

By moving towards a better human-computer interface, AI technology becomes more biological, whilst testing cultural and philosophical "truths" through praxis. Philosophical theories about the nature of the human "mind", and its relationship with the body, are applied and exposed in praxis. These philosophies are found not just in AI research and development, but also in the appropriation and recycling of existing technology. The processes of change and evolution in Artificial Intelligence technology are not linear but far more fluid; hence the development of Virtual Reality, Cyberspace, CyberArt, Expert Systems, Artificial Life, and so on.

At a micro level, the atomic dynamics of AI technology are often disturbed by immersion in foreign environments that lack the historical, biological and philosophical synonymy of its place of origin. For example, the appropriation of Western technology by Japanese technologists - the use of the declarative language PROLOG for their Fifth Generation Computers - poses a threat of dichotomizing theory and praxis. Such appropriation of foreign technology is massively problematic because of translation difficulties between language systems. Technology is an extension of the human body. The concept of mimicking humans to create a closer human-computer interface exerts a lot of pressure on the translation of human activity into algorithms. Some arguments for technology assert that the limit of representation is in the mind of the beholder (Winograd and Flores, 1987, p. 86), creating pressure for a perfect universal theory (Anderson, 1983). AI technology betrays the philosophical positions of its innovators, as it embodies theories of the body and mind, of the functional nature of computer technology and its relationship with humankind, and many other concerns. McLuhan and Powers' (1989) distinction between the visual and audile processes of human perception, and the cultural biases towards one half of this dialectic in Japan and the United States of America, suggest that technologies are as diverse in species as the cultures of both countries. The historicity of technology is crucial to its evolution. The philosophical theories of foreparent innovators form the gene pool or DNA structure that dictates the form and basic characteristics of its evolution. Although technology mutates through its evolutionary process by the intervention of subsequent technologists, the fundamental principles of the technology remain to form its foundation.
When a technology is transplanted from its original environment into another, its development can be either boosted or stunted depending on the application. The underlying intention of AI machines is to facilitate social, economic and material production - a desire that is dynamic and can never be satisfied. The myth of satisfaction drives production and facilitates consumption. The only currency of desire is that of Capital, a product of the economy, and the economy is the process of technological evolution (Rothschild, 1992, p. xiii). Rothschild (1992) gives a convincing account of this relationship between economics and technology. As technology moves towards the human subject, it inevitably evolves into the sphere of biology, which parallels the evolution of its socio-economic environment. AI technology has adopted biological concepts to explain some of its own phenomena.

COMPUTERS AS ONE MULTIPLE?

The multiplicity of applications within a computer system, especially within the arena of "Windows" software and desktop publishing cum video systems, supports Deleuze and Guattari's principle of the One Multiple (1987), where the interchangeable, multiple componential applications within the computer system defy categorization. "Windows" software allows a multiple grouping of applications according to the needs of the user, who navigates from one file window to another so as to facilitate cross-application work. The same phenomenon applies to the hardware, where components such as graphics cards, scan converters and others are connected to allow for multiple applications at any one time. The computer and user become a unified focal point where all computer-related activities are carried out within the centre of the computer screen. This focal point can also be hidden from the user by overwriting the multiple screens with a dominant activity that occupies the entire computer screen, avoiding all unwarranted distractions from other applications.

Paul McCarthy (1992) identified the site where the Subject is dissolved in the process of postmodern atomization as an "assemblage of unlike entities which act and react mutually and successively with and against each other..." (de Sade, 1968), because "a Multiplicity is neither subject nor object, only determinations, magnitudes and dimensions that cannot increase in number without the multiplicity changing in state..." (Deleuze and Guattari, 1987). The infinite multiplicity of the computer network and of multiple-application software shatters the Subject and his/her sequential experience by allowing fluid, multiple navigation within cyberspace for multiple experiences within a single moment in time.

Deleuze and Guattari's One Multiple (1987) principle is of vital importance to our understanding of this new technological environment, because the threat of an explosion of multiples from the myriad potential of computer technology has to be dealt with by our perceptual capacities. Somewhere within this multiple we must be able to perceive a form of wholeness, to be able to conceptualize the subject/object, before we can generate some form of representation within our cognitive apparatus. However abstract a message, it has to be represented within a conceptual framework before any form of sense or reference (Frege, 1990) can be attained by the perceiver. If what we sense is an explosion of the Multiple, then what we get is a form of sensorial seizure - madness - not knowing what we are seeing and not comprehending our multiple experiences happening all at once. McLuhan and Powers' (1989, p. 19) prescription for information theory, "...data overload equals pattern recognition...", does not hold much ground if the data overload is randomized, causing conceptual violence for the perceiver and rarely "pattern recognition". The patterns generated from data overload are often fleeting moments in the dynamic randomization process, unless the data comprises a multitude of inherent laws that predispose its interactive behaviour.

 

FLOW AND CONTINUUM

What is interesting about interactive computer technology is the blurring of the boundaries between art and technology, and between biology and the physical sciences. Artistically, anything is possible - as long as it is computable. Obviously this flies in the face of the Darwinian theory of natural selection, where the mutation and evolution of species depend only on the survival of the fittest within a given competitive environment. What we are now doing with science and technology is performing our own version of genetic engineering, where seemingly opposing objects/subjects are combined, sometimes randomly, to create something different. Nothing is sacred in this manipulation, not even the utility of mutants. Disparate mathematical algorithms, such as those developed for refrigerators or other mundane applications, are suddenly possible candidates for artistic expression in the form of computer-generated art. Electronic art is a field that has taken off in light of the spread of computer applications across the multiple platforms of video graphics, digital sound, animation, and Artificial Intelligence developments such as voice recognition and computer vision. The integration of all communication platforms is introduced to the creative energies of humans to form a multi-sensorial, multi-faceted instrument for an integrated artform, where "all individuals, their desires and satisfactions, are co-present in the age of communications" (McLuhan and Powers, p. 94). Multimedia and integrated computer technologies generate meaning on the run: signification happens at a particular moment in time, within a particular context, as the user interacts with the machine.

Susan Sontag (1987), in describing the "new sensibility" of the postmodern era, asserted that the "chasm" between the scientific and artistic "worlds" is "an illusion, a temporary phenomenon born of a period of profound and bewildering historical change". The "chasm" can only occur on the assumption of artistic stagnation in the face of great scientific and technological leaps. Sontag disagreed with this view, as "the practice of these arts (modern arts) - all of which draw profusely, naturally, and without embarrassment, upon science and technology - are the locus of the new sensibility" (1987, p. 299). The boundaries between Science and Art have been blurred as both disciplines now demand a similarly high level of creativity. Literary writers like William Gibson have inspired Artificial Intelligence applications, notably through Gibson's breakthrough novel Neuromancer. To Sontag, the "new classicism" is found in the "exploration of the impersonal (or transpersonal) in contemporary art" (1987, p. 297), where the subject/object distinction is renounced in favour of a message or a concept. Art is now information about the world, without the sentimentality of the past. "The point is that there are new standards, new standards of beauty and style and taste. The new sensibility is defiantly pluralistic; it is dedicated both to an excruciating seriousness and to fun and wit and nostalgia. It is also extremely history-conscious; and the voracity of its enthusiasm (and of the suppression of these enthusiasms) is very high-speed and hectic" (1987, p. 304).

The hybridization of technologies promises an ultimate fluidity of inter-machine and human-machine interface in which new and emerging phenomena and processes are created. Whilst such a perfect interface may be utopian, it is crucial to maintain some positivism for research and development in our negotiation with the capital-driven technologies that run our economy-driven societies. Such technologies also take on the delivery of information for the masses, manipulating cultural information genetically (as memes), mutating and evolving it to create new and novel structures and configurations; hence the birth of multimedia and virtual reality technologies.

Continuity is crucial to the human experience; the flow of time and causal relationships are of primary importance to our perceptual apparatus. Computer technology's ability to simulate the flow of continuity through powerful transformation tools has enormous creative potential for human expression. To Marshall McLuhan, motion pictures comprised a blending of "Euclidean and acoustic thinking, of the mechanical and the electrical...(where)...the imagination is most creative in acoustic space...whose focus or "centre" is simultaneously everywhere and whose margin is nowhere" (McLuhan and Powers, 1989, p. 134). McLuhan's "acoustic space" is a non-linear, three-dimensional space in which the spheres of influence of subjects/objects are projected from centres, and in which processes are "related simultaneously" (McLuhan and Powers, 1989, p. 8). The acoustic space is a "fundamental cyberspatial conception with its creation of multi-dimensional environments, a spherical environment within which aural information is received by the CNS (Central Nervous System)..." (Theall, 1992, [22]). The loss of margins and the mapping of spheres are ideal for the fluid process of change and metamorphosis in film art. Computer transformation processes such as morphing collapse the physical boundaries of discrete objects by plotting a process of transformation that moves linearly from the initial object to the desired finished object. The in-between frames of the transformation are combined mutations of both objects, where features are blended according to their proximity to either end of the sequence. Such metamorphoses are widely practised within the animation and sci-fi film genres.
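The linear movement from initial object to finished object described above can be sketched as a simple blend of feature values. This is a deliberately reduced model: real morphing also warps geometry, whereas the hypothetical feature vectors below (a few numeric values standing in for pixels or landmarks) show only the cross-dissolve of features along the transformation.

```python
def morph(start: list[float], end: list[float], steps: int) -> list[list[float]]:
    """Plot a linear transformation from `start` to `end` feature values."""
    frames = []
    for k in range(steps + 1):
        t = k / steps   # 0.0 = initial object, 1.0 = desired finished object
        # Each in-between frame blends both objects, weighted by proximity
        # to either end of the sequence:
        frames.append([(1 - t) * a + t * b for a, b in zip(start, end)])
    return frames

# Hypothetical feature vectors for the two discrete objects:
frames = morph([0.0, 10.0, 4.0], [8.0, 2.0, 4.0], steps=4)
```

The first frame is purely the initial object, the last purely the final one, and every frame between them is a combined mutation of both, with no physical boundary at which one object ends and the other begins.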

 

NOTES

 

1. Artificial Intelligence (AI) (Rich & Knight, 1991; Winston, 1992; Ginsberg, 1993; Luger & Stubblefield, 1993; Shapiro, 1992; Bundy, 1990; Barr & Feigenbaum, 1986; Sundermeyer, 1991; Webber & Nilsson, 1981; Kurzweil, 1990) consolidates all research theories and praxis about Cognition, Memory Systems, Neurology, Consciousness, Thought, the Unconscious, Human Physiology and Movement, Language, Speech, Vision and Hearing into a tangible, workable form by embodying them in computer technology. Theories of human cognition are tested in the process of creating a mechanical "mind" or "consciousness", collapsing the boundaries between the Sciences and the Humanities. AI research activity revolves around the infinite space between scientific and humanistic pursuits, between form and content, and is interested in extending the boundaries of intelligence research to incorporate Applied Machine Intelligence. Underlying AI research is the assumption that computers will ultimately be intelligent in an autonomous or semi-autonomous fashion, so as to interact with their users in a humanistic environment and communicate with humans in their natural way, using natural language and speech.

 

 

 

