The Challenge of Digital Epistemologies

 

Colin Lankshear

 

Draft paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, 3 April 2002

 Introduction

This symposium will broadly address the theme of Qualitative Research Methodology and Social Practice in Online Spaces in terms of what we call the 'challenge of digitization of everyday life'. The symposium has two overall purposes. The first, with which I will begin in this paper, is to briefly report some results of research conducted at points where physical space, cyber/virtual space and social practices intersect, in order to identify some typical examples of change pertinent to qualitative research of social phenomena.

The second main purpose is to address what we see as some key issues concerning research methodology, validity, ethics and epistemology that arise in contexts of educationally relevant social practices mediated by new digital information and communications technologies (ICTs). We are especially interested in three facets of this challenge here.

What Manuel Castells (1996) calls the information technology revolution is associated with profound changes in our conceptions and experiences of time, space, relatedness, and what it is to know things. This opening paper of the symposium will focus on some notable changes pertaining to subjects, objects and processes of knowledge that impact more or less directly and significantly on research processes. To begin with, then, let me mention a few examples of changes reported in contemporary literature associated with escalating 'digitization of the everyday'. I will focus here on what I see as four important dimensions of change:

Changes in 'the world (objects, phenomena) to be known' associated with the impact of digitization

Changes in conceptions of knowledge and processes of 'coming to know'

Changes in the constitution of 'knowers' which reflect the impact of intensified digitization

Changes in the relative significance of, and balance among, different modes of knowing associated with digitization

These changes will broadly frame our larger discussion in this symposium of implications for research methodology, the validity of research conducted at interfaces of social practice, cyberspace and physical space, and the value of such research for advances in educational theory and practice.

Changes in 'the world (objects, phenomena) to be known' associated with the impact of digitization

In Being Digital, Nicholas Negroponte recounts checking in at an event where he was asked whether he had a laptop computer with him, and how much it was worth (in case it was lost or stolen). Negroponte valued it at one to two million dollars. The check-in clerk disputed that a machine could have such a high value and asked what kind it was. On being told, she assigned it a $2000 value. Negroponte says this reflects the distinction between 'atoms' and 'bits' as different kinds of stuff.

Not unreasonably, the person responsible for property was thinking in terms of atoms. The laptop was the atom stuff, and its value as atom stuff was $2000. Negroponte, being 'digital,' reckoned the value of the machine entirely in terms of its 'bits'--its 'content' in terms of the ideas and patent potentials 'contained' (even the language gets tricky) as binary code some'where' on the hard disk. Depending on what was 'on' the disk at the time, the value could have been anything--for instance, a research proposal in the process of development could be worth whatever the budget of the project would be.

Being oriented toward 'bits', Negroponte approached the world to be known and engaged in a different way from the person who asked him the value of the computer in dollar terms. It seems a safe hunch that the insurance policies of the organization in question insured atoms rather than bits. Insurance valuers probably understand how to assess atoms far better than they understand how to assess bits.

Such examples raise all manner of epistemological questions and issues, many of which can be usefully thought about in terms of 'mindsets' (Barlow in Tunbridge 1995). In the physical world of atoms, it is customary to think in terms of finiteness and, as a consequence, in terms of concepts like scarcity, control, monopoly, zero-sum games and so on. When information is contained as atoms (e.g., in a book), for one person to have borrowed the library copy means another person cannot access it. This 'reality' and 'logic' impacts on our thought and behavior in all sorts of ways: from the importance of knowing to put books on reserve, to knowing the fastest way to the library after class to get the book before someone else does. When information is contained as bits, all sorts of alternatives arise. These include not having to know the fastest way to the library, how to navigate the Dewey catalogue system and so on. By the same token, there are other considerations to take into account and other things to be known and thought about.

Or, to draw on another example from Negroponte, what are the implications for how we approach and know the world when the stuff of something as common as a photograph changes fundamentally from analogue to digital form? Are color and image the same thing under these different regimes? If not, how do we experience the difference and respond to it in terms of thought, practice, etc.? Consider, for example, Negroponte's (1995: 14-15) account of a digital black and white photograph:

Imagine an electronic camera laying a fine grid over an image and then recording the level of gray it sees in each cell. If we set the value of black to be 0 and the value of white to be 255, then any gray is somewhere between the two. Conveniently, a string of 8 bits (called a byte) has 256 permutations of 1s and 0s, starting with 00000000 and ending with 11111111. With such fine gradations and with a fine grid, you can perfectly reconstruct the picture for the human eye.

Here again, the epistemological implications of this are massive. What is now involved in photographic knowledge, in judging the quality of images, in knowing how 'true' an image is, and so on? What are the implications for evaluative criteria, for participation in fine arts? What constitutes color? What is our concept of color once we have to think in terms of bits, resolution, software tradeoffs, etc.?
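The arithmetic in Negroponte's account is easy to make concrete. The following sketch (in Python, with NumPy; the tiny 4x4 'photograph' is invented purely for illustration) shows continuous light intensities being quantized onto the 256-level, one-byte-per-cell grid he describes:

    import numpy as np

    def digitize_gray(intensity):
        """Quantize light intensities in [0.0, 1.0] onto Negroponte's
        256-level grid: one byte (8 bits) per grid cell."""
        return np.clip(np.round(intensity * 255), 0, 255).astype(np.uint8)

    # A toy 4x4 'photograph': continuous light levels become discrete bytes,
    # running from 0 (00000000, black) to 255 (11111111, white).
    cells = np.linspace(0.0, 1.0, 16).reshape(4, 4)
    print(digitize_gray(cells))

Once an image is this kind of stuff--an array of bytes rather than grains of silver--questions about its 'trueness' become questions about sampling, quantization and subsequent processing.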

Neil Gershenfeld (1999: Ch. 2) provides a series of parallel examples for sound that test conventional physical-analogue mindsets to (and beyond) their very limits. These involve the work of Gershenfeld and colleagues at MIT's Media Lab to produce a digital Stradivarius cello: a computerized assemblage of hardware and software that pivots around a 'bow' equipped with intricate sensors. In part the aim is to produce an 'instrument' that can reproduce the quality of sound a player gets from the original Stradivarius instruments--and thereby make it easier for more people to access instruments that play with the sound quality of a Stradivarius. The principles involved include digitizing massive amounts of information about how an instrument turns a player's gestures into sounds and converting this into a computerized model. The implication is that

once a computer can solve in real time the equations describing the motion of a violin [or a cello] the model can replace the violin. Given a fast computer and good sensors, all the accumulated descriptions of the physics of musical instruments become playable instruments themselves (Gershenfeld 1999: 40).
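To make the idea of a playable physical model a little more concrete, here is a minimal sketch of Karplus-Strong plucked-string synthesis--a classic and far simpler digital string model than the Media Lab's cello work, offered only as an illustration of how equations describing string motion become a playable 'instrument':

    from collections import deque
    import numpy as np

    def pluck(freq_hz, duration_s, sample_rate=44100, damping=0.996):
        """Karplus-Strong synthesis: a noise-filled delay line, averaged
        and damped in a loop, behaves audibly like a plucked string."""
        n = int(sample_rate / freq_hz)        # delay-line length sets the pitch
        line = deque(np.random.uniform(-1.0, 1.0, n), maxlen=n)
        out = np.empty(int(duration_s * sample_rate))
        for i in range(out.size):
            out[i] = line[0]
            # The 'physics': a low-pass average models energy loss in the string.
            line.append(damping * 0.5 * (line[0] + line[1]))
        return out

    samples = pluck(220.0, 2.0)   # two seconds of a (very synthetic) plucked A

The gap between this toy and a digital Stradivarius is exactly the gap Gershenfeld describes: fast computers, good sensors, and vastly richer models.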

The approach involves taking a great instrument (like a Stradivarius) and a great player, putting the sensors on the instrument, recording the player's actions together with the sound, and then applying state-of-the-art data analysis techniques to derive the model. This kind of activity can have some unusual results, including one best recounted by Gershenfeld himself.

Soon after we were first able to model violin bowing with this kind of analysis I was surprised to come into my lab and see [a student] playing his arm. He has put the bow position sensor into the sleeve of his shirt so that he could play a violin without needing to hold a violin (ibid.: 42).

In addition, in collaboration with some of the world's leading musicians, the MIT team is aiming to produce instruments that make it possible to do things these musicians can conceive of doing but which cannot be done in practice because of physical limits in the 'real' world. For example, a cello can play only one or two notes at a time. Moreover, there are constraints to moving rapidly between notes played at opposite ends of a string, and there are limits to the range of sounds that can be made by bowing a string (ibid.: 33). Some of the musicians working with the Media Lab team wanted to explore what possibilities lie beyond the limits within which a cello, say, functions as the instrument we know. Experimentation aims at using digitized software and customized hardware to transcend existing constraints and enable musicians to conceive musical projects beyond these limits.

Such changes are ontological. They change the stuff of the world: cellos, sounds, and possibilities for musical composition. This poses questions about what becomes of musical and compositional knowledge, and unsettles conventional concepts and categories in music, in acoustics, in matters of musical technique and theory, and so on.

Changes in conceptions of knowledge and processes of 'coming to know' contingent upon deeper incursions of digitization into everyday practices

Two kinds of issues stand out here. One is associated with Lyotard's work on the changing status of knowledge with respect to the reasons for pursuing knowledge and the relationship between knowledge and 'truth'. The other is associated with issues of how we verify data that exists 'at a distance'.

To date the most influential and widely recognized view of how knowledge itself changes under conditions of intensified digitization has been Jean-Francois Lyotard's (1984) account of knowledge in the postmodern condition. Lyotard's investigation explores the working hypothesis that the status of knowledge changes as societies become 'postindustrial' and cultures 'postmodern': that is, as belief in the grand narratives of modernity diminishes, and as the effects of new technologies (especially since the 1940s) intensify and become entrenched.

The two key functions of knowledge--namely, research and the transmission of acquired learning in schools and higher education institutions--change under these twin impacts, which are now powerfully intertwined. Specifically,

knowledge is and will be produced in order to be sold, and it is and will be consumed in order to be valorized in a new production: in both cases, the goal is exchange (1984: 4).

Knowledge 'ceases to be an end in itself'; it loses its use value and becomes, to all intents and purposes, an exchange value alone. With increasing digitization, the changed status of knowledge takes on a number of distinctive features.

Lyotard sees some important implications and corollaries associated with this changed status of knowledge. In particular, as institutionalized activities of state and corporation, scientific knowledge (research) and education (transmission of acquired learning) become legitimated, in de facto terms, through the principle of performativity: optimizing the overall performance of social institutions according to the criterion of efficiency or, as Lyotard puts it, 'the endless optimization of the cost/benefit (input/output) ratio' (Lyotard 1993: 25). They are legitimated by their contribution to maximizing the system's performance, a logic which becomes self-legitimating--that is, enhanced measurable and demonstrable performance becomes its own end.

Lyotard suggests that within this kind of regime the primary concern of professionally-oriented students, the state, and education institutions will be with whether the learning or information is of any use--typically in the sense of 'Is it saleable?' or 'Is it efficient?'--not with whether it is true. Notions and practices of competence according to criteria like true/false, just/unjust get displaced by competence according to the criterion of high performativity. In such a milieu the 'fates' of individual learners will depend on factors which vary according to access to new technologies. According to Lyotard, under conditions of less than perfect information the learner-student-graduate-expert who has knowledge (can use the terminals effectively in terms of computing language competence and interrogation) and can access information has an advantage. However, the more closely conditions approximate to conditions of perfect information (where data is in principle accessible to any expert), the greater the advantage that accrues to the ability to arrange data 'in a new way'. This involves using imagination to connect together 'series of data that were previously held to be independent' (Lyotard 1984: 52).

Current philosophical work being developed under the aegis of 'telepistemology' (Goldberg 2000) provides an interesting perspective on challenges facing the possibility of knowledge--coming to know things--within the burgeoning field of Internet telerobotics. The issues raised can be seen as a subset of more general concerns about how far the Internet is a 'sufficiently reliable instrument' to serve as a source of knowledge (ibid.).

'Telerobotics' refers to remote systems in which a mechanism operating at one end is controlled by a human at the other (ibid.). Telerobotics has become accessible to Internet users, who can manipulate remote environments via web sites where web cameras provide live images and controls allow participants to act on what they see.
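In schematic terms, an Internet telerobotics session is just a loop of commands and returned images. A minimal sketch (Python standard library only; the endpoint names are hypothetical, invented for illustration) of the kind of round trip involved:

    import urllib.request

    BASE = "http://example.org/telegarden"   # hypothetical telerobotics site

    def snapshot(filename):
        # Fetch the current webcam frame of the remote environment.
        with urllib.request.urlopen(BASE + "/camera.jpg") as resp:
            with open(filename, "wb") as f:
                f.write(resp.read())

    def water():
        # The 'Water' button: an HTTP request that actuates the remote arm.
        urllib.request.urlopen(BASE + "/water", data=b"go")

    snapshot("before.jpg")
    water()
    snapshot("after.jpg")   # the before/after pair is the user's only evidence

Everything the user 'sees' of the remote garden arrives as bits through a loop of this kind.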

Epistemological issues arise here around whether and when one can believe what one sees within a context (the Internet) where forgeries are legion. Can one believe what one sees? And if one cannot necessarily believe what one sees, how much skepticism is it judicious to practise, and what problems might this pose for a model of knowledge as justified true belief as a standard for engaging with the Internet?

Goldberg contrasts the situation of telerobotics as found in TV news reports of NASA space probes with Net-based telerobotics. He uses the concept of 'authority systems' to distinguish the question of how far one can believe what one sees of Mars via TV news from how far one can unquestioningly accept the authenticity of telerobotics on the Web. The former, he argues, occur within contexts of operating 'disciplinary' and 'control' systems, which purportedly warrant confidence in the 'veridicality' (ibid.) of what one sees. While such authority systems are fallible--scientists have been known to cheat, and news networks to serve propaganda interests--the idea is that they nonetheless give us better warrant than does a medium (the Internet) which prides itself on maximum absence of such authority and control and where, to boot, significant numbers of participants spend their energies practising forgery and other forms of deceit.

Goldberg presents two vignettes addressing opposite sides of the same phenomenon.

1. The first vignette runs as follows.

Suppose, for example, that I visit an Internet site called the Telegarden that claims to allow users to interact with a real garden in Austria by means of a robotic arm. The page explains that by clicking on a "Water" button users can water the garden. Let 'P' be the proposition "I water the distant garden". Suppose that when I click the button, I believe 'P'. Furthermore I have good reason for believing 'P': a series of images on my computer screen shows me the garden before and after I press the button, revealing an expected pattern of moisture in the soil. And suppose that 'P' is true. Thus, according to the definition [of knowledge as justified true belief] all three conditions are fulfilled and we can say that I know that I watered the distant garden.

2. Goldberg's second vignette adapts long-standing philosophical arguments advanced against justified true belief by Edmund Gettier (1963) to the case of Internet telerobotics. The assumption seems to be that the Internet in general, and Internet telerobotics specifically, comprises an information source where Gettier's counterexamples get sufficient purchase to create real problems for justified true belief. In contrast to the first vignette

let 'P' be the proposition that I do not water a distant garden. Suppose now that when I click the button I believe 'P' and that I have good reasons: an expert engineer told me about Internet forgeries, how the whole garden is an elaborate forgery based on prestored images of a long-dead garden. Now suppose that there is in fact a working Telegarden in Austria but that the water reservoir happens to be empty on the day I click on the water button. So 'P' is true. But should we say that I know 'P'? No. But I believe 'P'. I have good reasons, and 'P' is true.

Problems attend both vignettes. Given the extent to which Internet forgery and fraud occurs, it's a safe bet that on many occasions the person in the first vignette will be deceived. On the other hand, just what degree of skepticism can we live with before the pursuit of knowledge (as justified true belief) actually becomes incoherent or, at the very least, unduly time consuming and impractical?

Philosophers will argue, of course, that philosophical skepticism has predated the Internet and telerobotics by millennia, and that so far as the philosophy of knowledge is concerned nothing substantial has changed with the advent of the Internet. The only changes have been contingent: namely, heightened dependence on other people's honesty when we use our senses to access empirical information--people can forge empirical data in a way we have to presume nature cannot--and, at the same time, massive evidence of Internet deceit and the emergence of entire subcultures devoted to perpetrating it in various forms.

So far as the practice of pursuing knowledge is concerned, however, such contingencies are important. And sooner or later philosophical models of knowledge (or of anything else) have to touch ground if they are to guide human behavior. This being so, we do in fact have some 'digital epistemologies' work to do here. Unfortunately, it is not especially clear what it is and how much there is of it to do. To see this point we can recall Lyotard's ideas about the changed status of knowledge and link it to the present issues.

The world of performativity is a world in which 'truth' seems to be far less of a concern than in the past. The object is to get things done, efficiently. We may have here a distinctively postmodern 'take' on Marx's famous 11th Thesis on Feuerbach: 'The philosophers have only interpreted the world, in various ways; the point is to change it' (Marx 1845-47). The driving motive behind the most powerful knowledge production these days is to create 'truths' rather than to discern them. At most, the latter seem to be in the service of the former. For example, research is widely commissioned by governments to vindicate policies and programs rather than to evaluate them openly. Consultants can make good livings doing 'research' that finds what their clients want to hear; or, at least, that does not find what clients do not want to hear. Massively funded research is undertaken to determine just how far it is possible to push frontiers in digital electronics and biotechnology (which, of course, involves discovering certain kinds of truths), not whether they should be pushed to where they can go (which involves other kinds).

To paraphrase Lyotard, access to perfect information being equal, imagination carries the day: imagination--and, to the same extent, 'truth' and knowledge--in the service of what James Gee calls 'enactive projects' (Gee, Hull and Lankshear 1996). Enactive projects are about bringing visions into reality, about making worlds in the image of visions. This has some very interesting and important ramifications for epistemology in a digital world of emerging 'cybercultures.' Two similar examples can illustrate the point here.

On April Fool's Day 1999 an Australian group perpetrated a benign scam on the Internet, soliciting shares in a bogus enterprise. The group was startled at the success of the scam, which they had concocted for pedagogical purposes: to show how easy it is to get conned. They returned every cent they received, but confessed to being surprised at just how many people were willing to part with so much of their money so readily. The other example concerns the stock invested in e-enterprises like Amazon.com, despite openly reported heavy financial losses over successive years, and the stunning share-raising feats achieved by Internet outfits like Netscape and Yahoo.

In the face of such examples it makes perfect sense to ask what the comparative significance might be to the punters of 'truth' (in the form of something not being a scam, or the likelihood of their making a good profit on their shares), on the one side, and the sense of being part of building something and making some history, on the other. Perhaps just being part of the emerging e-commerce scene--even if one is sometimes taken for a sucker--is more of a consideration than the 'truth status' of an enterprise. By comparison, troubling oneself about the 'veridicality' of some Internet telerobotics site may be entirely trivial or beside the point for many (see also Sherry Turkle's account of her encounter with 'Julia', below).

This becomes relevant when we ask how far, and for whom, it is important to develop 'awareness' on the part of Internet users so far as matters of 'veridicality' are concerned. Perhaps the passionate drive to keep the Internet as free of authority and control as possible is the corollary of some conventional epistemological constructs having gone down the toilet--at least for the meantime--along with some cherished grand narratives of modernity.

Changes in the constitution of 'knowers' which reflect the impact of digitization

Of the many observed changes in the constitution of knowing and believing subjects--the 'bearers' of propositional, procedural, and performance knowledge--contingent on intensified digitization of daily life, two must suffice here.

The first has been well recognized in a range of guises for some time now, within the contexts of 'new capitalist' workplaces and discourses, as well as in areas of inquiry like cognitive science, social cognition, and other neo-Vygotskian cognates (for an overview, see Gee, Hull and Lankshear 1996: Ch. 2). However, it has still to be recognized and taken up to any significant degree within formal education. This involves ideas like 'distributed cognition,' 'collaborative practice,' 'networked intelligence,' and 'communities of practice.'

Theories of distributed cognition, for example, have grown in conjunction with the emergence of 'fast capitalism' and networked technologies (Castells 1996; Gee, Hull and Lankshear 1996). Within work teams, a collective may share among them the knowledge pertaining to particular processes, or for solving problems that arise. Such teams may operate within a particular area of an enterprise, or be dispersed across several areas. A further instance, identified and discussed by Lyotard (1984), is found in the role and significance of multidisciplinary teams in 'imagining new moves or new games' in the quest for extra performativity within, say, research. Increasingly, the model of multidisciplinary teams supersedes that of the expert individual as the efficient means to making new moves (Lyotard 1984).

In addition, as described in Paul Gilster's (1997) account of 'knowledge assembly,' in the information-abundant world of the Internet and other searchable data sources it is often impossible for individuals to manage their own information needs, maintain an eye to the credibility of information items, and so on. Practices of information gathering and organizing are often highly customized and dispersed, with 'the individual' depending on roles played by various services and technologies. Hence, a particular 'assemblage' of knowledge that is brought together--however momentarily--in the product of an individual may more properly be understood as a collective assemblage involving many minds and machines. For instance, the knowing subject will increasingly make use of search engines, many of which employ bots: small, 'independent' artificial intelligence robots (Johnson 1997; Turkle 1995; Brown and Duguid 2000). These are composed of bits and bytes rather than screws and metal (<http://botspot.com/search/s-chat.htm>). They can move about in cyberspace and interact with other programs, performing a range of tasks that includes finding information to answer questions framed in natural language. AskJeeves is a well-known example of a bot-based program (<http://www.askjeeves.com>). We also use all manner of search engines that apply Boolean logic to our keywords, as well as customized newsfeeds, information feeds and editors--some mediated by other human beings, others operating as unmediated software programs.
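At its simplest, the Boolean logic such engines apply to our keywords can be sketched in a few lines (Python; the toy 'documents' and query are invented for illustration):

    def matches(document, required, excluded):
        """A bare-bones Boolean filter: every required term must appear
        in the document, and no excluded term may appear."""
        words = set(document.lower().split())
        return (all(t in words for t in required)
                and not any(t in words for t in excluded))

    docs = ["telerobotics on the internet",
            "gardening tips for spring",
            "internet gardening with a robot arm"]
    # The query: internet AND NOT gardening
    print([d for d in docs if matches(d, {"internet"}, {"gardening"})])
    # -> ['telerobotics on the internet']

Even this trivial filter is already doing a small share of the knower's work; real engines, bots and human-edited feeds do far more.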

Such examples pose problems for the notion that knowing, thinking, believing, being justified, and so on are located within the individual person (the 'cogitating' subject). This, however, is an underlying assumption of the justified true belief model, which construes propositional knowledge of 'P' as an attribute of an individual, A. Ultimately, schools too operate on this assumption at the level of their 'deep structure.' For all of the group work and collaborative activity that has entered classrooms in recent times, knowledge is seen in the final analysis as a private possession, and is examined and accredited accordingly.

The second example is a small-scale variation on the previous notion that to date exists mainly at an 'extreme geek' experimental level. It seems likely, however, to become much more common in the future. It involves people themselves, and not merely machines, being electronically wired together as networks by means of 'wearable computers.' Gershenfeld's younger colleagues in the MIT Media Lab provide graphic illustrations of what is at stake here. One, named Steve, wears a visor that covers his eyes and contains small display screens. 'He looks out through a pair of cameras, which are connected to his displays through a fanny pack full of electronics strapped around his waist' (Gershenfeld 1999: 45). Steve can vary his vision. When riding a bicycle in traffic he can mount a camera to the rear to view traffic approaching from behind, and when walking in a crowd he can point a camera to the footpath to see where he is walking.

Among the many extended applications made possible by virtue of the computer he wears is one that allows other people to see the data that goes to his display screens--via a web page from which others can access his recent images. By these means his wife can look out through Steve's eyes when he is shopping in the supermarket and help him select fruit, which is something he is not good at (ibid.).
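The sharing mechanism Gershenfeld describes amounts to publishing the wearer's most recent camera frame on a web page. A minimal sketch of such a server (Python standard library; the file latest.jpg, which the wearable is assumed to keep overwriting, is hypothetical):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    FRAME = "latest.jpg"   # hypothetical file the wearable's camera keeps updating

    class FrameHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve whatever the wearer is seeing right now.
            with open(FRAME, "rb") as f:
                data = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "image/jpeg")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    HTTPServer(("", 8080), FrameHandler).serve_forever()

Anyone who loads the page is, for that moment, looking out through Steve's eyes.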

Such an arrangement raises intriguing questions about what it means to know that a given piece of fruit is (or is not) of good quality, and to know how to select good fruit at a supermarket stand. In this case, multiple forms of knowledge are involved in the performance of selecting good fruit. Some of it, and only some of it, has to do with fruit. Much of it has to do with managing a wearable computer. As wearing computers becomes a more common practice, it seems almost inevitable that more and more knowing will take the form of collaborative, networked, and distributed processes and performances. While we may be unable at present to foretell the implications of this for curriculum with much specificity, it is clear that they will be enormous, and that now is the time to start thinking seriously about possible scenarios.

Changes in the relative significance of, and balance among, different kinds and modes of knowing

Conventional epistemology has privileged propositional knowledge, and supported the overwhelming domination within classrooms of text-based 'knowing that'. In principle, the book-centered 'modernist space of enclosure' that is the school (and, more specifically, the classroom) could support a more equitable balance between propositional knowledge and other modes and forms of knowledge--notably, procedural knowledge, or 'knowing how'--than it typically does. Even so, the abstraction and decontextualization of classrooms from mature forms of authentic non-scholastic social practices has seriously limited the range of possibilities until recently.

Now, however, the proliferation of new social practices predicated on nothing more than networked computers and access to expertise (which follows almost inevitably from having access to online communities of practice) makes it possible to admit distinctively new forms of curriculum pursuits into classrooms that can emulate 'mature' versions of social practices in ways that the cooking and woodwork rooms rarely could. Understanding the importance of this, the extent to which it should be pursued in the name of 'education,' and what it may involve in practice, will call for rethinking epistemology in terms of the evolving digital age.

This section will briefly address just four of the many facets that are likely to become increasingly relevant and important here.

First, there will be a need within an ongoing digital epistemologies research program to investigate knowledge in relation to building, inhabiting, and negotiating virtual worlds. This will involve aspects of personal and interpersonal knowledge, as when deciding how best to represent oneself using avatars and whatever other means become available. To 'outsiders' this may seem a trivial matter, but to 'insiders' it is anything but. For some participants, at least for a while, it may be enough simply to choose from avatars made available by virtual worlds (e.g., as is possible in ActiveWorlds or Outerworlds). Others will want to create their own (as in the Avatar Factory or Cybertown Palace), deciding whether and how their avatar will reflect who and what they see themselves as being. According to Michael Heim (1999: no page)

When people enter these [virtual] worlds, they choose their avatar, determining how they will appear to themselves and to others in the world. Even in worlds where avatar parts can be assembled piecemeal into customized identities, the initial design of the parts still strongly affects the look and feel of the avatar. Avatar design not only affects the perception of the self but it also affects possible ways of navigating through the world and the kind of dwellings that are appropriate for the avatar.

Clearly, all manner of issues will arise here for identity knowledge, as well as for knowing where and when one is as one moves between virtual and 'real' worlds: as 'one' moves between 'being' atoms and bits. Categories like 'real' and 'location' carry different meanings across the different spaces. Virtual reality splinters our working concepts of 'real life.' The significance of the conceptual shakiness of 'real life' versus 'not real life' is exemplified by the fallout that surrounded the Tamagotchi fad of handheld digital pets that 'died' if not cared for properly.

In a future that looks certain to involve a lot more interaction between humans and more or less human-like 'bit-beings,' new forms of inter'personal' knowledge will become increasingly important. Early indications of terrain to be traversed here were documented by Sherry Turkle (1995: 16).

Many bots roam MUDs [multi-user domains]. They log onto the games as though they were characters. Players create these programs for many reasons: bots help with navigation, pass messages, and create a background atmosphere of animation in the MUD. When you enter a virtual cafe, you are usually not alone. A waiter bot approaches who asks if you want a drink and delivers it with a smile.

Turkle goes on to explain how she has sometimes--as have others--mistaken a real person for a bot because their actions and comments within the MUD seemed 'bot-like' or 'too machine like' (ibid.). And, conversely, 'sometimes bots are mistaken for people. I have made this mistake too, fooled by a bot that flattered me by remembering my name or our last interaction' (ibid.).

Turkle describes one very accomplished bot, known as Julia, who was programmed to chat with players in MUDs, to engage in teasing repartee, and so on (Turkle 1995: 93). She relates a study by Leonard Foner describing how one person, Lara, reacted to Julia--both when Lara thought Julia was a person and when she knew Julia was a bot:

[Lara] originally thought Julia's limitations [conversation-wise] might be due to Down's (sic) syndrome. Lara's reaction when she finally learns that Julia is a bot reflects the complexity of current responses to artificial intelligence. Lara is willing to accept and interact with machines that function usually in an intelligent manner. She is willing to spend time in their company and show them cordiality and respect. … Yet, upon hearing that Julia was a bot, Lara says she felt "fright, giddiness, excitement, curiosity, and pride". There was also the thrill of superiority:

'I know this sounds strange, but I felt I could offer more to the conversation than she could. I tested her knowledge on many subjects. It was like I was proving to myself that I was superior to a machine….'

Interestingly, Lara still refers to the Julia program as 'she.'

A second area for development with respect to changes in the relative significance of, and balance among, different kinds and modes of knowing is inchoate in the efforts of people like Michael Heim (1999) to wrestle in varying ways with conceptions and issues of 'multimodal truth.' How do we make sense of 'truths' that are expressed not in propositions but through multiple media simultaneously and interactively?

Since the invention of the printing press, the printed word has been the main carrier of (what is presented as) truth. Mass schooling has evolved under what could be called a 'regime of print', and print more generally has 'facilitated the literate foundation of culture' (Heim 1999: no page). Of course, various kinds of images or graphics have been used in printed texts to help carry truth (e.g., tables, charts, graphs, photographs, illustrations). However, Internet technology merges pictures and print (not to mention sound--and developers are currently working on smell) much more intricately and easily than was ever possible before. As Heim (1999: no page) puts it,

[t]he word now shares Web space with the image, and text appears inextricably tied to pictures. The pictures are dynamic, animated, and continually updated. The unprecedented speed and ease of digital production mounts photographs, movies, and video on the Web. Cyberspace becomes visualized data, and meaning arrives in spatial as well as in verbal expressions.

Of course, virtual worlds--with their images and forms, the music found in a world or part of a world, the text one writes to communicate with others, the gestures and movements one's avatar can be programmed to make--are thoroughly multimodal in a seamless and 'natural' way. For example, if we teleport to Alphaworld, the music, the lush greenery and the strong, sunlit colours suggest that this world is probably a happy place--unlike Metatropolis, whose eerie music, strange lurking figures and barren, nighttime landscape suggest that one had better take care. Likewise, in Alphaworld there are no hidden tunnels and holes to get trapped in, as there are in Metatropolis, where escape may require exiting the world altogether. These worlds add up to wholes by means of sound, images, text, movement and change, requiring the inhabitant or tourist (yes, there are tourists in ActiveWorlds!) to be constantly reading these constitutive elements in order to make sense of the world they are in.

A third consideration inviting us to reassess the relative significance of and balance among multiple modes of knowing and forms of knowledge is the idea of an emerging 'attention economy' (Goldhaber 1997). To the extent that people in postindustrial societies increasingly live their lives in the spaces of the Internet, their lives will fall more and more under economic laws organic to this new space. Michael Goldhaber (1997, 1998a, 1998b), among others, has argued that the basis of the coming new economy will be attention. Attention is inherently scarce, and it moves through the Net.

The idea of an attention economy is premised on the fact that the human capacity to produce material things outstrips the net capacity to consume the things that are produced--given the existing irrational contingencies of distribution. For the powerful minority of people whose 'material needs at the level of creature comfort are fairly well satisfied,' the need for attention becomes increasingly important, and increasingly the focus of their productive activity.

[T]he energies set free by the successes of … the money-industrial economy go more and more in the direction of obtaining attention. And that leads to growing competition for what is increasingly scarce, which is of course attention. It sets up an unending scramble, a scramble that also increases the demands on each of us to pay what scarce attention we can (Goldhaber 1997: no page).

Within an attention economy, individuals seek stages--performing spaces--from which they can perform for the widest/largest possible audiences. Goldhaber observes that the various spaces of the Internet lend themselves perfectly to this model.

The importance of gaining attention has been extended to enterprises operating in the growing Network Economy. NCR's 'Knowledge Lab' (<http://www.knowledgelab.com>) is an early player in the domain of identifying the kind of knowledge needed to gain the attention of consumers--who face a glut of information relevant to their requirements--and to pay to consumers the kind of reciprocal attention that will generate brand loyalty. According to the Knowledge Lab:

Attention will be an increasingly scarce commodity. Firms will have [to] think of themselves as operating both in an Attention Market as well as their core market.

Attention will be hard to earn, but if it is viewed as a reciprocal flow, firms can use information about consumers and customers to stand out in a sea of content to increase profitability: pay attention to them and they pay attention to you. Relationships are likely to encompass attention transactions. As customers realize the value of their attention and their information needed to get it, we show that they may require payment of some kind for both.

The Knowledge Lab is looking into how we can quantify, measure and track flows of attention in the Network Economy (<http://www.knowledgelab.com>).

What kind of knowledge will be advantageous for operating in the attention economy? Goldhaber argues that in a full-fledged attention economy the goal is simply to get enough attention or to get as much as possible. This becomes the primary motivation for and criterion of successful performance in cyberspace. Generating information will principally be concerned either with gaining attention directly, or with paying what Goldhaber calls 'illusory attention' to others in order to maintain the degree of interest in the exchange on their part necessary for gaining and keeping their attention.

Beyond this, Goldhaber (ibid.) argues that gaining attention is indexical to originality. It is difficult to get new attention 'by repeating exactly what you or someone else has done before.' Consequently, the attention economy is based on 'endless originality, or at least attempts at originality.'

Some challenges facing conventional epistemology

While all sorts of variations and complexities exist around the kernel of 'scientific knowledge' (e.g., falsificationism vs verificationism, niceties of validation, representation, interpretation and so on), it seems fair to say that to a great extent the trappings of a long-established model of knowledge commonly known as 'justified true belief' still dominate research methodology at the level of practice. This is especially true within higher degree research programs.

Knowledge as justified true belief is concerned with propositional knowledge and is typically rendered as a simple set of necessary and jointly sufficient conditions.

According to this epistemology, for A (a person, knower) to know that P (a proposition), three conditions must be met jointly: (i) A must believe that P; (ii) P must be true; and (iii) A must be justified in believing that P.
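The structure of the classical analysis--and the way Goldberg's second vignette breaks it--can be laid out schematically. In this sketch (Python, invented purely for illustration) the three conditions are modelled as flags; the Gettier-style case satisfies all three and is still, intuitively, not knowledge:

    from dataclasses import dataclass

    @dataclass
    class EpistemicState:
        believes: bool    # condition (i): A believes that P
        true: bool        # condition (ii): P is in fact true
        justified: bool   # condition (iii): A is justified in believing P

    def jtb_knowledge(s):
        # The classical claim: the three conditions are jointly sufficient.
        return s.believes and s.true and s.justified

    # Goldberg's second vignette: P ('I do not water a distant garden') is
    # believed on an engineer's testimony and happens to be true because the
    # reservoir is empty -- yet intuitively this is not knowledge.
    gettier_case = EpistemicState(believes=True, true=True, justified=True)
    print(jtb_knowledge(gettier_case))   # -> True: the analysis over-counts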

The ideas raised above pose some serious challenges for this epistemology and for sedimented qualitative research practices that remain to a large extent based upon it. I will identify very briefly five challenges.

1. The standard epistemology constructs knowledge as something that is carried linguistically and expressed in sentences/propositions and theories. The multimedia realm of digital ICTs makes possible--indeed, makes normal--the radical convergence of text, image, and sound in ways that break down the primacy of propositional linguistic forms of 'truth bearing.' While many images and sounds that are transmitted and received digitally still stand in for propositional information (cf. Kress's notion of images carrying complex information), many do not. They can behave in epistemologically very different ways from talk and text--for example, evoking, attacking us sensually, shifting and evolving constantly, and so on. Meaning and truth arrive in spatial as well as textual expressions (Heim 1999), and the rhetorical and normative modes challenge the scientific-propositional on a major scale.

Michael Heim (1999) offers an interesting perspective on this in his account of what he calls 'the new mode of truth' that will be realized in the 21st century. He claims that as new digital media displace older forms of the typed and printed word, questions about how truth is 'made present' through processes that are closer to rituals and iconographies than to propositions and text re-emerge in forms similar to those discussed by theologians since medieval times. Heim argues that incarnate truth as the sacred Word is transmitted through a complex of rituals and images integrated with text-words. In the case of the Catholic church, for instance:

communal art is deemed essential to the transmission of the Word as conceived primarily through spoken and written scriptures. The word on the page is passed along in a vessel of images, fragrances, songs, and kinesthetic pressed flesh. Elements like water, salt, and wine contribute to the communication. Truth is transmitted not only through spoken and written words but also through a participatory community that re-enacts its truths through ritual (Heim 1999: no page).

The issue of how truth is made present in and through the rituals of the community of believers-practitioners has been an abiding concern of theologians for centuries. Is the presence of incarnate truth granted to the community through ritualized enactment of the sacred word real, or should it be seen as symbolic or, perhaps, as a kind of virtual presence? (ibid.). Heim suggests that this and similar questions take on new significance with the full-flowering of digital media. If truth 'becomes finite and accessible to humans primarily through the word,' he asks, 'what implications do the new media hold for the living word as it shifts into spatial imagery?' (ibid.).

Heim casts his larger discussion of these issues in the context of Avatar worlds being constructed by online users of virtual reality (VR) software to express their visions of virtual reality as a form of truth. These visions are realized and transmitted through what Heim calls the 'new mode of truth.'

2. In the traditional view, knowing is an act we carry out on something that already exists, and truth pertains to what already is. In various ways, however, the kind of knowing involved in social practices within the diverse spaces of new ICTs is very different from this. More than propositional knowledge of what already exists, much of the knowing involved in the new spaces might better be understood in terms of a performance epistemology--knowing as an ability to perform--in the kind of sense captured by Wittgenstein's 'I now know how to go on.' This is knowledge of how to make 'moves' in 'language games': the kind of knowledge involved in becoming able to speak a literal language, but also the move-making knowledge involved in Wittgenstein's broader notion of 'language games' (Wittgenstein 1953).

At one level this may be understood in terms of procedures like making and following links when creating and reading Web documents. At another level it is reflected in Lyotard's idea that the kind of knowledge most needed by knowledge workers in computerized societies is the procedural knowledge of languages like telematics and informatics--recalling here that the new ICTs and the leading edge sciences are grounded in language-based developments--as well as of how to interrogate. Of particular importance to 'higher order work' and other forms of performance under current and foreseeable conditions--including performances that gain attention--is knowledge of how to make new moves in a game and how to change the very rules of the game. This directly confronts traditional epistemology that, as concretized in normal science, presupposes stability in the rules of the game as the norm and paradigm shifts as the exception. While the sorts of shifts involved in changing game rules cannot all be on the scale of paradigm shifts, they nonetheless subvert stability as the norm.

3. Standard epistemology is individualistic. Knowing, thinking/cognition, believing, being justified, and so on are seen as located within the individual person (knowing subject). This view is seriously disrupted in postmodernity. Theories of distributed cognition, for example, have grown in conjunction with the emergence of 'fast capitalism' (Gee, Hull and Lankshear 1996) and networked technologies. This is a complex association, the details of which are beyond us here (see also Castells 1996, 1997, 1998). It is worth noting, however, that where knowledge is (seen as) the major factor in adding value and creating wealth, and where knowledge workers are increasingly mobile, it is better for the corporation to ensure that knowledge is distributed rather than concentrated. This protects the corporation against unwanted loss when individuals leave. It is also, of course, symmetrical with the contemporary logic of widely dispersed and flexible production that can make rapid adjustments to changes in markets and trends.

A further aspect of this issue is evident in Lyotard's recognition of the role and significance of multidisciplinary teams in 'imagining new moves or new games' in the quest for extra performativity. The model of multidisciplinary teams supersedes that of the expert individual (Lyotard's professor) as the efficient means to making new moves.

In addition, we have seen that in the information-superabundant world of the Internet and other searchable data sources it is often impossible for individuals to manage their own information needs, maintain an eye to the credibility of information items and so on. Practices of information gathering and organizing are often highly customized and dispersed, with 'the individual' depending on roles being played by various services and technologies. Hence, a particular 'assemblage' of knowledge that is brought together--however momentarily--in the product of an individual may more properly be understood as a collective assemblage involving many minds (and machines).

4. To a large extent we may be talking about some kind of post-knowledge epistemology operating in the postmodern condition. In the first place, none of the three logical conditions of justified true belief is necessary for information. All that is required for information is that data be sent from sender to receivers, or that data be received by receivers who are not even necessarily targeted by senders. Information is used and acted on. Belief may follow from using information, although it may not, and belief certainly need not precede the use of information or acting on it.

There is more here. The 'new status' knowledge of Lyotard's postmodern condition--knowledge that is produced to be sold or valorized in a new production--does not necessarily require that the conditions of justified true belief be met. This follows from the shift in the status of knowledge from being a use value to becoming an exchange value. For example, in the new game of 'hired gun' research where deadlines are often 'the day before yesterday' and the 'answer' to the problem may already be presupposed in the larger policies and performativity needs of the funders, the efficacy of the knowledge produced may begin and end with cashing the check (in the case of the producer) and in being able to file a report on time (in the case of the consumer). Belief, justification and truth need not come within a mile of the entire operation.

Even an account like the one Gilster provides of assembling knowledge from news feeds stops short of truth, for all his emphasis on critical thinking, seeking to avoid bias, distinguishing hard and soft journalism, and so on. The objectives are perspective and balance, and the knowledge assembly process as Gilster describes it is much more obviously a matter of a production performance than an unveiling of what already exists. We assemble a point of view, a perspective, an angle on an issue or story. This takes the form of a further production, not a capturing or mirroring of some original state of affairs.

5. So far as performances and productions within the spaces of the Internet are concerned, it is questionable how far 'knowledge' and 'information' are the right metaphors for characterizing much of what we find there. In many spaces where users are seeking some kind of epistemic assent to what they produce, it seems likely that constructs and metaphors from traditional rhetoric or literary theory--e.g., composition--may serve better than traditional approaches to knowledge and information.

Conclusion

To the extent that such perceived challenges to conventional epistemology have force, they carry implications for qualitative research. At the very least, they imply that besides taking into account the standard sorts of considerations associated with developments in poststructuralist, post-colonialist, postmodern and post-positivist theorising, debates around themes like validity, verification, representation and interpretation should also reckon with quite specific dimensions of contemporary change such as those identified here.

Note

This paper draws on material presented in other places, notably:

Lankshear, C., Peters, M. and Knobel, M. (2000) Information, knowledge and learning. The Journal of the Philosophy of Education Society of Great Britain 34, 1.

Lankshear, C. and Knobel, M. (2001) What is digital epistemologies? In J. Suoranta et al. (eds.), The Integrated Media Machine: Aspects of Internet Culture, Hypertechnologies and Informal Learning. Helsinki and Rovaniemi: Edita and the University of Lapland.

Acknowledgment

The work reported in this paper was supported financially by the Faculty of Education and Creative Arts, Central Queensland University, and by the Australian Research Council.

References

Brown, J. and Duguid, P. (2000) The Social Life of Information. Boston: Harvard Business School Press.

Castells, M. (1996) The Rise of the Network Society. Oxford: Blackwell.

Castells, M. (1997) The Power of Identity. Oxford: Blackwell.

Castells, M. (1998) End of Millennium. Oxford: Blackwell.

Gee, J. P., Hull, G. and Lankshear, C. (1996) The New Work Order: Behind the Language of the New Capitalism. Sydney: Allen and Unwin.

Gershenfeld, N. (1999) When Things Start to Think. New York: Henry Holt and Company.

Gettier, E. (1963) Is justified true belief knowledge? Analysis, 23: 121-3.

Gilster, P. (1997) Digital Literacy. New York: John Wiley and Sons Inc.

Goldberg, K. (2000) The Robot in the Garden: Telerobotics and Telepistemology on the Internet. www.ieor.berkeley.edu/~goldberg/art/tele/index.html (accessed 21 Mar. 2002).

Goldhaber, M. (1997) The attention economy and the net. First Monday. firstmonday.dk/issues/issue2_4/goldhaber (accessed 2 Jul. 2000).

Goldhaber, M. (1998a) The attention economy will change everything, Telepolis (Archive 1998). www.heise.de/tp/english/inhalt/te/1419/1.html (accessed 30 Jul. 2000).

Goldhaber, M. (1998b) M. H. Goldhaber's principles of the new economy. www.well.com/user/mgoldh/principles.html (accessed 2 Jul. 2000).

Heim, M. (1999) Transmogrifications. www.mheim.com/html/transmog/transmog.htm (accessed Mar. 2002).

Johnson, S. (1997) Interface Culture: How New Technology Transforms the Way We Create and Communicate. San Francisco: HarperEdge.

Lyotard, J-F. (1984) The Postmodern Condition: A Report on Knowledge. Trans. Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press.

Lyotard, J-F. (1993) A svelte appendix to the postmodern question. In Political Writings. Trans. Bill Readings and Kevin Paul Geiman. Minneapolis: University of Minnesota Press.

Marx, K. (1845-47) Theses on Feuerbach. hegel.marxists.org (accessed 25 Jul. 2002).

Negroponte, N. (1995) Being Digital. New York: Vintage Books.

Tunbridge, N. (1995) The cyberspace cowboy. Australian Personal Computer, December: 2-4.

Turkle, S. (1995) Life on the Screen: Identity in the Age of the Internet. London: Phoenix.

Wittgenstein, L. (1953) Philosophical Investigations. Oxford: Blackwell.

 
