Information, knowledge and learning:
Rethinking epistemology for education in a digital
age
Colin
Lankshear
Invited Keynote Address, Vth
National Congress of Educational Research,
Aguascalientes, México,
31 October 1999
Introduction
This paper will look at some
challenges that face the epistemological model on which formal
education is based as a result of the rapid growth in use of new
communications and information technologies (or CITs). The argument
has three parts.
The first - which is by far
the longest part - will describe some phenomena associated with the
development during recent decades of these new CITs and their large
scale incorporation into diverse social practices. While much of the
detail will be based on the growth of new technologies in the First
World, it is obvious that the nature and directions of this growth
impact powerfully on trends in countries like México.
The second part of the
argument will briefly describe what I see as the epistemological
model underlying formal education. This is an epistemological model
which is very old and deeply entrenched.
The third part of the
argument considers some of the ways the phenomena described in the
main part of the paper challenge this epistemological model and, to
that extent, educational practices based upon it.
The conclusion to the
argument is that in the current context there is an urgent need for
philosophers of education, along with other educational scholars and
researchers, to consider the implications for epistemology of the new
technologies and social practices of the 'digital age'. Of course, if
we accept the line of argument I offer but do not like the
implications that come from it, we may have to address in a deep and
serious way the issue of the aims and purposes of education, and the
relationship between formal education and the various agendas we see
unfolding in the digital age.
Part 1: Some
phenomena associated with the rise of new CITs
(i)
Knowledge in the postmodern condition
In The Postmodern Condition
Jean-François Lyotard (1984) advances what has proved to be a
highly prescient and compelling account of scientific (as distinct
from narrative) knowledge in so-called 'advanced' societies. His
working hypothesis states that
the status of knowledge is
altered as societies enter what is known as the postindustrial age
and cultures enter what is known as the postmodern age (1984:
3).
Lyotard's analysis of the
postmodern condition is a report on the status of knowledge under the
impact of technological transformation within the context of the
crisis of narratives - especially Enlightenment metanarratives
concerning meaning, truth, and emancipation which have been used to
legitimate both the rules of knowledge in the sciences and the
foundations of modern institutions. His concept of the postmodern
condition describes the state of knowledge and the problem of its
legitimation in the most 'highly developed' countries, in the wake of
'transformations which, since the end of the nineteenth century, have
altered the game rules for science, literature and the arts' (1984:
3; Peters 1995).
By 'transformations' Lyotard
means particularly the effects of new technologies since the 1940s
and their combined impact on the two main functions of knowledge:
namely, research and the transmission of acquired learning. He argues
that the leading sciences and technologies are all grounded in
language-based developments - in theories of linguistics,
cybernetics, informatics, computer languages, telematics, theories of
algebra - and on principles of miniaturization and commercialization.
This is a context in which
knowledge is and
will be produced in order to be sold, and it is and will be consumed
in order to be valorized in a new production: in both cases, the goal
is exchange (1984: 4).
Knowledge, in other words,
'ceases to be an end in itself'; it loses its use value and
becomes, to all intents and purposes, an exchange value alone. The
changed status of knowledge comprises at least the following
additional aspects.
Availability of knowledge as an international commodity
becomes the basis for national and commercial advantage within the
emerging global economy
Computerized uses of knowledge become the basis for
enhanced state security and international monitoring
Anything in the constituted body of knowledge that is
not translatable into quantities of information will be
abandoned
Knowledge is exteriorized with respect to the knower,
and the status of the learner and the teacher is transformed into a
commodity relationship of 'supplier' and 'user'.
Lyotard's critique frames
the central question of legitimation of scientific knowledge in terms
of its functions of research and transmission of learning within
computerized societies where metanarratives face 'incredulity' (1984:
xxiv). In his critique of capitalism Lyotard argues that the state
and company/corporation have found their only credible goal in power.
Science (research) and education (transmission of acquired learning)
as institutionalized activities of state and corporation are/become
legitimated, in de facto terms, through the principle of
performativity: of optimizing the overall performance of social
institutions according to the criterion of efficiency or, as Lyotard
puts it, "the endless optimalization of the cost/benefit
(input/output) ratio" (Lyotard 1993, 25). They are legitimated by
their contribution to maximizing the system's performance, a logic
which becomes self legitimating - that is, enhanced measurable and
demonstrable performance as its own end.
Performativity
in education at all levels calls for our schools and universities to
make "the optimal contribution . . . to the best performativity of
the social system" (Lyotard 1984, 48).
This involves creating the
sorts of skills among learners that are indispensable to maximum
efficiency of the social system which, for societies like our own, is
a system of increasing diversity and players in the marketplace of
global capitalism. Accordingly, two kinds of skills
predominate:
1. Skills
"specifically designed to tackle world [economic] competition," which
will vary "according to which 'specialities' the nation-states or
educational institutions can sell on the world market"
2. Skills
which fulfill the society's "own needs." These have to do with
maintaining the society's "internal cohesion." Under postmodern
conditions, says Lyotard, these cohesion skills displace the old
educational concern for ideals. Education is now about supplying
"the system with players capable of acceptably filling their roles at
the pragmatic posts required by its institutions" (48).
As Marshall (1998)
notes
educational
institutions . . . will be used to change people away from the former
liberal humanist ideals (of knowledge as good in itself, of
emancipation, of social progress) to people who through an organized
stock of professional knowledge will pursue performativity through
increasingly technological devices and scientific managerial theories
(12).
There are several
implications for knowledge and learning in the context of formal
education.
According to Lyotard, 'to the extent that learning is
translatable into computer language and the traditional teacher is
replaceable by memory banks, didactics can be entrusted to machines
linking traditional memory banks (libraries, etc.) and computer data
banks to intelligent terminals placed at the students' disposal'
(1984: 50).
At higher levels of education, instruction by teachers
would be directed to teaching students 'how to use the terminals'
rather than transmitting content. Lyotard identifies two aspects
here: (a) teaching new languages (e.g., informatics, telematics), and
(b) developing refined abilities to handle 'the language game of
interrogation' - particularly, to what information source should the
question be addressed, and how should the question be framed in order
to get the required information most efficiently?
The primary concern of professionally-oriented
students, the state, and education institutions will be with whether
the learning or information is of any use - typically in the sense of
'Is it saleable?' or 'Is it efficient?' - not with whether it is
true.
Competence according to criteria like true/false or
just/unjust has been displaced by the criterion of high
performativity.
Under conditions of less than perfect information the
learner/student/graduate/expert who has knowledge (can use the
terminals effectively in terms of computing language competence and
interrogation) and can access information has an advantage. The
nearer conditions are to perfect information (where data is in
principle accessible to any expert), advantage comes from the ability
to arrange data 'in a new way' - to make a new 'move' in the
knowledge game, or to 'change the rules of the game' - by using
imagination to connect together 'series of data that were previously
held to be independent' (1984: 52). Imagination, in the final
analysis, becomes the basis of extra performativity.
(ii) Dealing
with superabundant information
With the advent of the new
CITs we have entered what Mark Poster (1995) calls the second media
age, or the second age of mass communications in the twentieth
century. The first age, comprising film, radio and television, was
based on a logic of broadcast. Here 'a small number of producers sent
information to a large number of consumers,' transcending earlier
constraints of time and space by initially electrifying analogue
information and, later, by digitizing it. The integration of
satellite technology with telephone, television and computer media
has brought the emergence of a many-to-many logic of communication,
which is Poster's second media age. Boundaries between producers,
distributors and consumers of information break down, and social
relations of communication are radically reconfigured under
conditions of infinitely greater scope for interactive communication
than in the broadcast model (Poster 1995: 3).
There are some interesting
and important contingencies associated with this second media age,
and with the Internet in particular.
First, there is the
now-notorious issue of the sheer volume of available information.
While the phenomenon known as info-glut (Postman 1990; Gilster 1997:
6) or data smog (Shenk 1998) is by no means confined to the Internet
and other information technologies, it certainly reaches an apex
here.
Second, the Net is a
radically 'democratic', inclusive medium where information is to a
large extent unfiltered. Paul Gilster (1997: 38-39) notes that even
with the introduction of cable television, conventional mass media
are nonetheless exclusive. Certain categories of content are excluded
through the filtering decisions and actions of programming executives
and the like. While many information sources on the Internet
(especially on the WWW) filter and otherwise moderate content in
accordance with their perceived interests and purposes, this is in no
way the norm.
Third, a great deal of
information on the Internet is presented persuasively. Gilster (1997: 2-3) notes
that with the tools of electronic publication being dispersed
practically on a global scale, 'the Net is a study in the myriad uses
of rhetoric.' The importance of presentation and the incentives to
present information in maximally compelling ways should not be
underestimated in the context of what Goldhaber (1998) calls 'the
attention economy' (see below).
Gilster (1997: Chapter 7)
describes a practice he calls 'knowledge assembly' which he sees as a
necessary new literacy in and for the information age. He asks how
one builds knowledge out of online searching and caching, and how
specific items of information are to be evaluated. He seeks open,
non-prejudiced inquiry, which strives for balance, goes where the
evidence leads, and aims to get at the heart of the themes or issues
in question.
For Gilster, knowledge
assembly is 'all about building perspective'. It proceeds by way of
'the accretion of unexpected insights' (Gilster 1997: 195, 219).
When it is used properly, says Gilster,
Networked
information possesses unique advantages. It is searchable, so that a
given issue can be dissected with a scalpel's precision, laid open to
reveal its inner workings. It can be customized to reflect our
particular needs. Moreover, its hypertextual nature connects with
other information sources, allowing us to listen to opposing points
of view, and make informed decisions about their validity (ibid:
196).
Knowledge assembly is about
targeting issues and stories using customized news feeds and
evaluating the outcomes. It is the
ability to
collect and evaluate both fact and opinion, ideally without bias.
Knowledge assembly draws evidence from multiple sources, not just the
World Wide Web; it mixes and distinguishes between hard journalism,
editorial opinion, and personal viewpoints. [It] accepts the
assumption that the Internet will become one of the major players in
news delivery … but it also recognizes the continuing power of the
traditional media (ibid: 199).
Gilster describes the tools
and procedures of knowledge assembly using the Internet in terms of a
five step process.
The first step involves
developing a customized personalized electronic news service - a
personal newsfeed. This is done by subscribing to an online news
service and entering keywords that define the topics or issues you
want to receive breaking stories about. The service - often fee
charging, depending on the range of information sources it culls -
then sends you by email or via a web page which can be tailored for
personal use stories on topics of interest as they break. (For more
detailed descriptions of the kinds of services available, see Gilster
1997: 201-208).
The second step augments the
first, (which draws on formal 'published' information, or 'hard
news'). In the second step one subscribes to online newsgroups and
mailing lists that deal with the subject(s) of interest. These offer
the personal viewpoints and opinions of participants on the issues in
question, providing access to what (other) netizens make of the
topic. Some newsgroups make their own newsfeeds available, which
helps with focused searching by subtopics and the like among the
myriad postings that occur across a range of lists on daily and,
even, hourly bases.
In the third step identified
by Gilster one searches the Internet for background information -
e.g., by going to the archives of online newspapers to get a history
of the build up of the story or issue thus far. Gilster also
mentions using search engines to find Internet links to sites
covering key players in the story or issue. These may provide related
stories or other information which helps contextualize the issue or
topic, providing additional breadth, variables and angles.
The fourth step involves
drawing together other helpful Internet news sources, such as radio
archives accessed by software like RealAudio, interactive chat
sessions, video archives and so on. Although the facility should not
be abused, direct email links might also be used to verify or
disconfirm information.
The final step in the
assembly process takes us beyond Internet sources of information and
involves relating the information obtained from networked sources to
non networked sources: such as television, conventional newspapers,
library resources, and so on. This is indispensable to seeking
balance and perspective, since it puts the issue or story being
worked on into a wider context of news and information - including
prioritized contexts (e.g., where newspapers consistently run the
story on page 1, or on page 12).
These steps toward 'filling
the information cache' entail diverse understandings, skills and
procedures - many of which are only acquired through regular use and
'practice'. For example, learning to find one's way around the
innumerable mailing lists and news groups/discussion lists;
identifying the 'predilections' of different search engines, and
which one to use (and with which other ones) for particular areas or
topics; how to narrow searches down by refining keyword checks; how
to use Boolean logic, and which search engines employ which Boolean
commands and protocols, and so on. Gilster also mentions specific
'tools' of content evaluation that one uses along the way to filling
one's information cache, item by item: for instance, the credentials
of the sources, the probable audience a source pitches at, the likely
reliability of the source, distinctions such as those between
'filtered, edited news' … personal opinion … and propaganda (ibid:
217).
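The logic of the newsfeed-and-cache procedure Gilster describes can be sketched in a few lines of code. The sketch below is purely illustrative: the NewsItem type, the build_cache function, and the sample items are invented for this example, and stand in for no actual newsfeed service's API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a 'personal newsfeed': the subscriber registers
# keywords, incoming items are matched against them, and matches are
# cached grouped by Gilster's source distinctions (hard news, editorial
# opinion, personal viewpoint). All names here are invented assumptions.

@dataclass
class NewsItem:
    source: str   # e.g. a wire service, an online paper, a newsgroup
    kind: str     # 'hard news', 'editorial', or 'personal viewpoint'
    text: str

def build_cache(items, keywords):
    """Collect items matching any keyword, grouped by kind of source."""
    cache = {}
    for item in items:
        if any(k.lower() in item.text.lower() for k in keywords):
            cache.setdefault(item.kind, []).append(item)
    return cache

items = [
    NewsItem("wire", "hard news", "Peso falls against the dollar"),
    NewsItem("newsgroup", "personal viewpoint", "My take on the peso slide"),
    NewsItem("wire", "hard news", "Local election results announced"),
]
cache = build_cache(items, ["peso"])  # only the first two items match
```

The grouping step matters because, as the fifth step emphasizes, evaluation depends on distinguishing hard journalism from opinion before weighing either.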
(iii) The
logic of online information searching and its constitutive
effects
Michael Heim (1993) explores
some constraining influences on how we interrogate the world of
information - and, indeed, the world itself - that can be seen as
associated with normalized practices of a digital regime. He focuses
on Boolean search logic, since nowadays to a large and growing extent
we 'interrogate the world through the computer interface' and 'most
computer searches use Boolean logic' (1993: 14-15).
Heim's underlying point is
that to live within the digital regime means that in no time using
Boolean search logic and similar computing strategies becomes 'second
nature' - something we take for granted (1993: 14). He is interested
in how this will 'affect our thought processes and mental life' and,
to that extent, how we will be constituted as searchers, thinkers,
and knowers. He builds on two key ideas:
The types of questions we ask shape the possible
answers we get
The ways we search limit what we find in our
searching.
Heim's account of the
relationships between question types and answers, and between search
modes and what our searches turn up, focuses on the operating mode of
the search engine. On the surface it may appear that search engines
have already moved beyond using Boole's tools: the use of AND, NOT,
OR, NEAR, etc., in conjunction with 'key words, buzz words and
thought bits to scan the vast store of knowledge' (1993: 22). Some
search engines now invite us simply to ask them a question or enter a
few words. (The 'initiated', of course, still prefer to work with key
words and Boole.) But beneath the surface of our natural language
questions or phrases the software is still operating pretty much
according to Boole. The point is that all such searching makes use of
logics that presume pre-set, channeled, tunneled searching: pointed
rather than open searching. Invitations from the machine to refine
our search (as when too many data sources are identified) are
invitations to further sharpen/focus 'an already determined will to
find something definite'; to 'construct a narrower and more efficient
thought tunnel'; to create still finer funnels to sift and channel
'the onrush of data' (1993: 22-23).
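Heim's point about Boolean 'thought tunnels' can be made concrete with a toy example. In the sketch below the index is invented data: each keyword maps to the set of documents containing it, so AND, OR and NOT become set intersection, union and difference, and each additional AND term visibly narrows what a search can return.

```python
# Toy document index: keyword -> set of document ids containing it.
# The data is invented purely to illustrate Boolean search logic.
index = {
    "knowledge":  {1, 2, 3, 5, 8},
    "education":  {2, 3, 5, 9},
    "technology": {3, 5, 7},
}

knowledge_and_education = index["knowledge"] & index["education"]   # AND -> {2, 3, 5}
knowledge_or_technology = index["knowledge"] | index["technology"]  # OR  -> widens the result
knowledge_not_education = index["knowledge"] - index["education"]   # NOT -> {1, 8}

# Each extra AND term constructs a 'narrower and more efficient
# thought tunnel': the result set can only shrink, never grow.
refined = index["knowledge"] & index["education"] & index["technology"]
```

Nothing outside the pre-set keyword circles can ever turn up, which is precisely the contrast Heim draws with open-ended, meditative reading.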
Heim contrasts this kind of
information scan with what he calls 'meditative perusal' - noting
along the way that for some champions of online searching
'meditating' is reduced to reflective efforts to find sharper and
more discriminating key words. Information scanning is pre-conceived,
focused, highly goal-directed, and treats texts as data. The key
values of information scanning are speed, functionality, efficiency
and control. The answers we get from scanning are bounded and
defined: data falling inside the kinds of spaces described by the
overlapping circles of Venn diagrams. We can then use what we get in accordance
with our knowledge purposes.
In contrast to this, Heim
describes 'meditative perusal' as the kind of 'contemplative,
meditative meander along a line of thinking' that we might engage in
by slowly reading a book and keeping 'the peripheral vision of the
mind's eye' open. Here the reader is open to unexpected connections,
meaning and interpretation, options that were taken and others that
were not, authorial hunches, tensions and contradictions, and so on.
This is an approach to knowledge/getting to know (about) something
which privileges intuition, the unexpected, and openness to
'discoveries that overturn the questions we originally came to ask'
and to 'turning up something more important than the discovery we had
originally hoped to make' (1993: 25-26). Insofar as spaces on the
Internet can, like books, be browsed in this mode, doing so will
require us to resist the wider web of values and purposes to which
search logics are recruited or, at the very least, to be and remain
aware of wider options that exist.
(iv)
Economies of information and attention
The superabundance of
information has been linked to the hypothesis of an emerging
attention economy in ways that have important epistemological
implications. The fact that information is in over-saturated supply
is seen as fatal to the coherence of the idea of an information
economy - since 'economics are governed by what is scarce' (Goldhaber
1997). Yet, if people in postindustrial societies will increasingly
live their lives in the spaces of the Internet, these lives will fall
more and more under economic laws organic to this new space. Numerous
writers (e.g., Goldhaber 1989, 1992, 1996; Lanham 1994; Thorngate
1988, 1990) have argued that the basis of the coming new economy will
be attention and not information. Attention, unlike information, is
inherently scarce. But like information it moves through the
Net.
The idea of an attention
economy is premised on the fact that the human capacity to produce
material things outstrips the net capacity to consume the things that
are produced - such are the irrational contingencies of distribution.
In this context, 'material needs at the level of creature comfort are
fairly well satisfied for those in a position to demand them'
(Goldhaber 1997) - the great minority, it should be noted, of people at
present. Nonetheless, for this powerful minority, the need for
attention becomes increasingly important, and increasingly the focus
of their productive activity. Hence, the attention
economy:
[T]he energies
set free by the successes of … the money-industrial economy go more
and more in the direction of obtaining attention. And that leads to
growing competition for what is increasingly scarce, which is of
course attention. It sets up an unending scramble, a scramble that
also increases the demands on each of us to pay what scarce attention
we can (Goldhaber 1997).
Within an attention economy,
individuals seek stages - performing spaces - from which they can
perform for the widest/largest possible audiences. Goldhaber observes
that the various spaces of the Internet lend themselves perfectly to
this model. He makes two points of particular relevance to our
concerns here.
First, gaining attention is
indexical to originality. It is difficult, says Goldhaber, to get new
attention 'by repeating exactly what you or someone else has done
before.' Consequently, the attention economy is based on 'endless
originality, or at least attempts at originality.'
Second, Goldhaber argues
that in a full-fledged attention economy the goal is simply to get
enough attention or to get as much as possible. (In part this
argument is predicated on the idea that having someone's full
attention is a means for having them meet one's material needs and
desires.) This becomes the primary motivation for and criterion of
successful performance in cyberspace. Generating information will
principally be concerned either with gaining attention directly, or
with paying what Goldhaber calls 'illusory attention' to others in
order to maintain the degree of interest in the exchange on their
part necessary for gaining their attention.
(v)
Multimodal truth
Since the invention of the
printing press the printed word has been the main carrier of (what is
presented as) truth. Mass schooling has evolved under the regime of
print, and print has more generally 'facilitated the literate
foundation of culture' (Heim 1999). Of course various kinds of images
or graphics have been used in printed texts to help carry truth
(e.g., tables, charts, graphs, photographic plates, illustrations).
However, Web technology merges pictures and print (not to mention
sound) much more intricately and easily than has ever been possible
before. As Heim puts it
The word now
shares Web space with the image, and text appears inextricably tied
to pictures. The pictures are dynamic, animated, and continually
updated. The unprecedented speed and ease of digital production
mounts photographs, movies, and video on the Web. Cyberspace becomes
visualized data, and meaning arrives in spatial as well as in verbal
expressions.
This situation now confronts
the primary focus within classroom-based education on the
linguistic-verbal-textual resources of reading, writing and talk.
Teaching and learning have been seen throughout the history of mass
education as principally linguistic accomplishments (Gunther Kress,
personal communication). Recently, however, teachers and
educationists have become increasingly interested in the role of
visual representations in relation to teaching and learning. 'The
importance of images as an educational medium is beginning to be
realised, as text books, CD ROM, and other educational resources
become increasingly reliant on visual communication as a medium for
dealing with large amounts of complex information' (Kress, personal
communication).
Part 2: The
epistemological model of formal education
I will identify in a broad
sweep some of the key elements of the epistemological model that has
underpinned education throughout the modern-industrial era. We can
then go on to consider how far these elements may be under question
in a digital age where more and more of our time, purposes, and
energies are invested in activities involving new communications and
information technologies.
Throughout the
modern-industrial era of print, learning has been based on curriculum
subjects organized as bodies of content which are in turn based on
work done in the disciplines (history, mathematics, natural science,
etc.). The primary object of learning was the content of subjects.
This was based on the premise that what we need to know about the
world in order to function effectively in it, and that is to be
taught in formal education, is discovered through (natural and
social) scientific inquiry. Even the very 'practical' or 'manual'
subjects (cooking, woodwork, etc.) contained a considerable 'theory'
component.
Although it often did not
happen in actual practice, school learning has also been based on the
assumption that, by participating in curriculum subjects derived from
the disciplines, learners would not only acquire the content of
acquired learning but also come to see how this content gets
discovered and justified by experts. In other words, ideally the
learner would not only learn a body of truths - e.g., historical
truths - but would also learn something of how historians (physicists,
mathematicians, etc.) arrive at these truths methodologically and how
they are proved and justified. To use a once-common formulation from
Anglo-American educational philosophy, knowledge has both its
literatures (content) and its languages (disciplined procedures), and
successful learning initiates learners into both (cf. Hirst
1974).
The epistemology underlying
this model of learning is basically the standard view of knowledge
which has dominated Western thought since the time of Plato. This is
widely known as the 'justified true belief' model. According to this
epistemology, for A (a person, knower) to know that p (a
proposition)
A must believe that p
p must be true
A must be justified in believing that p
(see, for example,
Scheffler 1965)
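In standard notation (a common textbook formalization rather than Scheffler's own), the three conditions can be stated as a single biconditional:

```latex
K_A(p) \iff B_A(p) \,\wedge\, p \,\wedge\, J_A(p)
```

where \(K_A(p)\) abbreviates 'A knows that p', \(B_A(p)\) 'A believes that p', and \(J_A(p)\) 'A is justified in believing that p'.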
This general model allowed
for many variations, for instance in theories of truth
(correspondence, coherence, pragmatist), in theories of reality
(realism, idealism) and so on. But beneath all such variations,
justified true belief has been the epistemological standard for two
millennia, and has been applied (in a more or less particular way) to
school curricular learning.
Part 3: Some
challenges facing this epistemology
The ideas presented in Part
1 pose some serious challenges for this epistemology and for
established educational practices based on it. I will identify very
briefly five challenges.
1. The
standard epistemology constructs knowledge as something that is
carried linguistically and expressed in sentences/propositions and
theories. This is hardly surprising considering that for two
millennia the modes for producing and expressing knowledge have been
oral language and static print. To the extent that images and
graphics of various kinds have been employed in texts their role has
been, literally, to illustrate, summarize, or convey propositional
content.
The multimedia realm of
digital CITs makes possible - indeed, makes normal - the radical
convergence of text, image, and sound in ways that break down the
primacy of propositional linguistic forms of 'truth bearing.' While
many images and sounds that are transmitted and received digitally
still stand in for propositional information (cf. Kress' notion of
images carrying complex information mentioned above), many do not.
They can behave in epistemologically very different ways from talk
and text - for example, evoking, attacking us sensually, shifting and
evolving constantly, and so on. Meaning and truth arrive in spatial
as well as textual expressions (Heim 1999), and the rhetorical and
normative modes challenge the scientific-propositional on a major
scale.
Michael Heim (1999) offers
an interesting perspective on this in his account of what he calls
'the new mode of truth' that will be realized in the 21st century. He
claims that as new digital media displace older forms of the typed and
printed word, questions about how truth is 'made present' through
processes that are closer to rituals and iconographies than
propositions and text re-emerge in similar forms to those discussed
by theologians since medieval times. Heim argues that incarnate truth
as the sacred Word is transmitted through a complex of rituals and
images integrated with text-words. In the case of the Catholic
church, for instance:
communal art is
deemed essential to the transmission of the Word as conceived
primarily through spoken and written scriptures. The word on the page
is passed along in a vessel of images, fragrances, songs, and
kinesthetic pressed flesh. Elements like water, salt, and wine
contribute to the communication. Truth is transmitted not only
through spoken and written words but also through a participatory
community that re-enacts its truths through ritual (Heim,
1999).
The issue of how truth is
made present in and through the rituals of the community of
believers-practitioners has been an abiding concern of theologians
for centuries. Is the presence of incarnate truth granted to the
community through ritualized enactment of the sacred word real, or
should it be seen as symbolic or, perhaps, as a kind of virtual
presence? (ibid.). Heim suggests that this and similar questions take
on new significance with the full-flowering of digital media. If
truth 'becomes finite and accessible to humans primarily through the
word,' he asks, 'what implications do the new media hold for the
living word as it shifts into spatial imagery?' (ibid.).
Heim casts his larger
discussion of these issues in the context of Avatar worlds being
constructed by online users of virtual reality (VR) software to
express their visions of virtual reality as a form of truth. These
visions are realized and transmitted through what Heim calls the 'new
mode of truth.'
2. In the
traditional view knowing is an act we carry out on something that
already exists, and truth pertains to what already is. In various
ways, however, the kind of knowing involved in social practices
within the diverse spaces of new CITs is very different from this.
More than propositional knowledge of what already exists, much of the
knowing that is involved in the new spaces might better be understood
in terms of a performance epistemology - knowing as an ability to
perform - in the kind of sense captured by Wittgenstein as: 'I now
know how to go on.' This is knowledge of how to make 'moves' in
'language games.' It is the kind of knowledge involved in becoming
able to speak a language in the literal sense, but it is also, more
broadly, the move-making knowledge involved in Wittgenstein's notion
of 'language games' (Wittgenstein, 1953).
At one level this may be
understood in terms of procedures like making and following links
when creating and reading Web documents. At another level it is
reflected in Lyotard's idea that the kind of knowledge most needed by
knowledge workers in computerized societies is the procedural
knowledge of languages like telematics and informatics - recalling
here that the new CITs and the leading edge sciences are grounded in
language-based developments - as well as of how to interrogate them. Of
particular importance to 'higher order work' and other forms of
performance under current and foreseeable conditions - including
performances that gain attention - is knowledge of how to make new
moves in a game and how to change the very rules of the game. This
directly confronts traditional epistemology that, as concretized in
normal science, presupposes stability in the rules of the game as the
norm and paradigm shifts as the exception. While the sorts of shifts
involved in changing game rules cannot all be on the scale of
paradigm shifts, they nonetheless subvert stability as the norm.
3. Standard
epistemology is individualistic. Knowing, thinking/cognition,
believing, being justified, and so on are seen as located within the
individual person (knowing subject). This view is seriously disrupted
in postmodernity. Theories of distributed cognition, for example,
have grown in conjunction with the emergence of 'fast capitalism'
(Gee, Hull and Lankshear 1997) and networked technologies. This is a
complex association, the details of which are beyond us here (see
also Castells 1996, 1997, 1998). It is worth noting, however, that
where knowledge is (seen as) the major factor in adding value and
creating wealth, and where knowledge workers are increasingly mobile,
it is better for the corporation to ensure that knowledge is
distributed rather than concentrated. This protects the corporation
against unwanted loss when individuals leave. It is also, of course,
symmetrical with the contemporary logic of widely dispersed and
flexible production that can make rapid adjustments to changes in
markets and trends.
A further aspect of this
issue is evident in Lyotard's recognition of the role and
significance of multidisciplinary teams in 'imagining new moves or new
games' in the quest for extra performance. The model of
multidisciplinary teams supersedes that of the expert individual
(Lyotard's professor) as the efficient means to making new
moves.
In addition, we have seen
that in the information-superabundant world of the Internet and other
searchable data sources it is often impossible for individuals to
manage their own information needs, keep an eye on the
credibility of information items, and so on. Practices of information
gathering and organizing are often highly customized and dispersed,
with 'the individual' depending on roles being played by various
services and technologies. Hence, a particular 'assemblage' of
knowledge that is brought together - however momentarily - in the
product of an individual may more properly be understood as a
collective assemblage involving many minds (and machines).
4. To a
large extent we may be talking about some kind of post-knowledge
epistemology operating in the postmodern condition. In the first
place, none of the three logical conditions of justified true belief
is necessary for information. All that is required for information is
that data be sent from sender to receivers, or that data be received
by receivers who are not even necessarily targeted by senders.
Information is used and acted on. Belief may follow from using
information, although it may not, and belief certainly need not
precede the use of information or acting on it.
There is more here. The 'new
status' knowledge of Lyotard's postmodern condition - knowledge that
is produced to be sold or valorized in a new production - does not
necessarily require that the conditions of justified true belief be
met. This follows from the shift in the status of knowledge from
being a use value to becoming an exchange value. For example, in the
new game of 'hired gun' research where deadlines are often 'the day
before yesterday' and the 'answer' to the problem may already be
presupposed in the larger policies and performativity needs of the
funders, the efficacy of the knowledge produced may begin and end
with cashing the check (in the case of the producer) and in being
able to file a report on time (in the case of the consumer). Belief,
justification and truth need not come within a mile of the entire
operation.
Even Gilster's account of
assembling knowledge from news feeds stops short of truth, for all
his emphasis on critical thinking, seeking to avoid bias,
distinguishing hard and soft journalism, and so on. The objectives
are perspective and balance, and the knowledge assembly process as
described by Gilster is much more obviously a matter of a production
performance than some unveiling of what already exists. We assemble a
point of view, a perspective, an angle on an issue or story. This
takes the form of a further production, not a capturing or mirroring
of some original state of affairs.
5. So far
as performances and productions within the spaces of the Internet are
concerned, it is questionable how far 'knowledge' and 'information'
are the right metaphors for characterizing much of what we find
there. In many spaces where users are seeking some kind of epistemic
assent to what they produce, it seems likely that constructs and
metaphors from traditional rhetoric or literary theory - e.g.,
composition - may serve better than traditional approaches to knowledge
and information.
Conclusion
The digital age has thrown
many of our educational practices and the assumptions that underlie
them into doubt. Understanding what will be involved in making
informed and principled responses to the conditions of postmodern
life in computerized societies will depend greatly on our willingness
to problematize and rethink long-standing epistemological assumptions
and investments. If this paper does no more than encourage us to
explore this claim further it will have done its job.
At the same time, the fact
that our established epistemological ideals and values are disturbed
by trends, directions, patterns, practices, and other phenomena
associated with the digital age and its new CITs might encourage us
to ask questions about the digital age itself, and about the role and
purposes of education and the relationship between education and
global directions being pushed from 'familiar centers of hegemonic
power,' and not simply to ask questions about what 'digital
epistemology' might look like.
For example, Lyotard's
argument about knowledge in the postmodern condition applies to
scientific knowledge. It leaves untouched the domain of narrative
knowledge, which has never been accountable to the epistemology of
justified true belief. Moreover, narrative knowledge has been largely
excluded from
practices of formal education. To redefine the role and purposes of
education away from the values of scientific knowledge, and away from
logics of exchange, performativity, and the like, might be to
emphasize the role and significance of narrative knowledge for
education (cf. Esteva and Prakash 1998; Prakash and Esteva
1998).
Alternatively, to retain a
strong commitment to the traditional epistemology associated with
formal education throughout modernity might require us to emphasize
within formal education the kinds of practices using new technologies
that accommodate their use to the pursuit of justified true belief
(e.g., Thagard 1997).
Then again, we may decide
that for formal education to prepare learners appropriately for the
world they will enter, it is necessary to acknowledge multiple,
hybrid, or eclectic epistemologies - reflecting
combinations of the sorts of ideas and trends discussed here: for
example, aspects of a propositional epistemology operating in
conjunction with aspects of a performance or 'compositional' or
'rhetorical' epistemology, together with multiple language games of
'truth' and the like.
There is much to think about
and many options to negotiate. The only option, perhaps, that is not
reasonably open to us is to stand still.
Note
An expanded and otherwise
modified version of this paper has been published as: Lankshear, C.,
Peters, M. and Knobel, M. (2000). Information, knowledge and
learning: Some issues facing epistemology and education in a digital
age. Journal of
Philosophy of Education. Special Issue: Enquiries at the Interface:
Philosophical Problems of Online Education, eds. Nigel Blake and Paul
Standish, vol 34, issue 1, feb 2000: 17-40
References
Castells, M. (1996).
The Rise of the Network
Society. Oxford:
Blackwell.
Castells, M. (1997).
The Power of
Identity. Oxford:
Blackwell.
Castells, M. (1998).
End of
Millennium. Oxford:
Blackwell.
Esteva, G. and Prakash, M. S. (1998). Grassroots Post-modernism:
Remaking the Soil of Cultures. London: Zed Books.
Gee, J. P., Hull, G. and
Lankshear, C. (1997). The New Work Order: Behind the Language of the New
Capitalism. Boulder,
CO.: Westview Press.
Gilster, P. (1997).
Digital
Literacy. New York:
John Wiley and Sons, Inc.
Goldhaber, M. (1997). The
attention economy and the net. At http://firstmonday.dk/issues/issue2_4/goldhaber/
Heim, M. (1993).
The Metaphysics of
Virtual Reality. New
York: Oxford University Press.
Heim, M. (1999).
Transmogrification. At http://www.mheim.com/html/transmog/transmog.htm
Hirst, P. (1974).
Knowledge and the
Curriculum. London:
Routledge and Kegan Paul.
Lanham, R. (1994). The
economics of attention. Proceedings of 124th Annual Meeting,
Association of Research Librarians. Austin, Texas. Formerly at:
http://sunsite.berkeley.edu/ARL/Proceedings/124/ps2econ.html
Lyotard, J-F. (1984).
The Postmodern
Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian
Massumi. Foreword by Fredric Jameson. Minneapolis: University of
Minnesota Press.
Lyotard, J-F. (1993). A
Svelte appendix to the postmodern question. In Political Writings. Translated by Bill Readings and Kevin
Paul Geison. Minneapolis: University of Minnesota Press.
Marshall, J. (1998).
Performativity: Lyotard, Foucault and Austin. Paper delivered to the
American Educational Research Association's Annual Meeting. 11-17
April. San Diego.
Peters, M. A. (1995) (Ed.)
Education and the
Postmodern Condition,
Westport, Conn. and London: Bergin & Garvey.
Poster, M. (1993).
The Mode of
Information. Chicago:
University of Chicago Press.
Poster, M. (1995).
The Second Media
Age. Cambridge, Mass.:
Polity Press.
Postman, N. (1993).
Technopoly: The
Surrender of Culture to Technology. New York: Vintage Books.
Prakash, M. S. and Esteva, G. (1998). Escaping Education: Living as
Learning within Grassroots Cultures. New York: Peter Lang.
Scheffler, I. (1965).
Conditions of
Knowledge. Chicago:
Scott, Foresman.
Shenk, D. (1998).
Data Smog: Surviving the
Information Glut. New
York: HarperCollins.
Thagard, P. (1997). Internet
epistemology: Contributions of new information technologies to
scientific research. Formerly at http://cogsci.uwaterloo.ca/Articles/Pages/Epistemplogy.html
Thorngate, W. (1988). On
paying attention. In W. Baker, L. Mos, H. VanRappard and H. Stam
(eds.), Recent Trends in
Theoretical Psychology.
New York: Springer-Verlag. 247-264.
Thorngate, W. (1990). The
economy of attention and the development of psychology.
Canadian
Psychology 31,
262-71.
Wittgenstein, L. (1953). Philosophical Investigations. Trans. G. E.
M. Anscombe. Oxford: Blackwell.