Rants, Ratings and Representation: Issues of ethics, validity and reliability in researching online social practices

 

Michele Knobel

Draft Paper presented at the Annual Meeting of The American Educational Research Association. New Orleans, 3 April 2002

Introduction

The past 10 years have witnessed a growing debate--particularly within the social sciences--over what constitutes ethical research practice where cyberspaces and human interaction are concerned. Some researchers argue that codes of ethical conduct currently used in physical spaces--principally those endorsed by universities and influential professional associations (e.g., the American Psychological Association)--hold equally and immutably for cyberspaces. In other words, these researchers argue there is no difference between investigating human interaction and subjectivity offline or online. Others have argued for a more situated or negotiated approach to ethical research practice. For them, researching communities and practices on the internet requires new approaches to ethical conduct because what holds in physical space--or meatspace--hardly ever translates directly into cyberspace, and may even hinder "good" research: for example, insisting on informed consent from participants in a study may irreparably disrupt an online community or series of interactions, and assurances of participants' anonymity in research reports are deeply problematic in the archived and searchable network of cyberspace.

This paper engages primarily with issues concerning ethical research conduct when investigating online practices, with reference to studies whose data is drawn solely from cyberspaces, as well as to studies that include an online investigation component (e.g., studies of what children do when participating in online social spaces outside school, compared with what they do in class in terms of internet and computer use). The aim of the paper is not to pronounce on what is right and wrong in terms of researcher actions, but instead to open up for discussion and reflection the complexities associated with conducting ethical research online. The latter part of the paper outlines key challenges to conventional approaches to ascertaining communicative validity and trustworthiness in qualitative studies that draw on or include investigations of cyberspaces. The paper ends with a closer look at some of the complexities attending online research within the field of education and suggests three maxims for guiding ethically informed, communicatively valid and trustworthy research conducted in online (and offline) spaces.

Online research and ethical activity

Online research usually refers to two kinds of activity. The first is the analysis of online, public documents (such as newspapers, journals, letters, policies, books, etc.), which are treated as texts in the sense of texts used in a theoretical or historical study (cf., Knobel and Lankshear 1999), or which help locate the study theoretically, historically, politically or socially in ways that do not require the author's anonymity to be preserved. Second, online research can refer to the study of inter-networked cyberspaces. It is this second sense that is taken up in this paper. Where researchers working within the field of education are concerned, this activity generally focusses on person-to-person interactions and communications on and over the internet, and includes the study of websites, online chatspaces, instant messaging uses, email discussion lists or messages, archived discussions and other types of person-to-person exchanges. Online research can be conducted entirely within cyberspace; that is, the entire corpus of data is downloaded from the internet. Online research can also form one component of a study that straddles physical space and cyberspace. In general, online research does not include investigating non-networked computing activities such as creating PowerPoint presentations or wordprocessing habits; these activities fall under conventional meatspace guidelines for ethical research because they generally involve more tangible observation and physical presence within physical settings.

In research circles, ethics is defined as the study of the general nature of the formal or informal system of principles or standards that guide what is considered to be "right" or "proper" conduct, and the decision-making processes involved in the moral choices people make. Morality refers to the actual application of a system of principles or standards to conduct in a range of contexts and in ways that are, or can be, judged in terms of conformity or nonconformity to a generally accepted standard for or rule of conduct. In short, ethics is the study of moral activity. As already mentioned, some researchers argue that where ethics and online research are concerned, nothing has changed: the standards and principles that apply to physical space research apply equally to research in cyberspace. Others argue that ethical issues encountered in meatspace research are often amplified in cyberspace and require even more careful attention to the moral consequences of gathering data from online interactions. For example, the Association of Internet Researchers (AOIR 2001: 1) identifies the following differences between online and offline research. In online research, there is:

This list of difficulties is not exhaustive, nor would all researchers agree with what has been included in it. Nevertheless, these differences signal important interfaces or points where problems can arise.

Despite the proliferation of guidelines for conducting ethical research, much of the existing ethical commentary concerning online research wrestles principally with three issues: (1) the distinction between public and private spaces; (2) obtaining informed consent from study participants; and (3) the assurance of participants' anonymity in research publications. A focus on the research context, informed and willing participation, and reporting issues, however, risks suggesting that once these things have been taken care of--along with every researcher's duty to respect those participating in the study and to treat and present them as dignified beings--then ethical considerations have been well satisfied. However, within education in particular, conducting ethically informed online research is a complex process. One of the key difficulties confronting researchers who work within the field of education and who engage with online research is the direct and indirect involvement of a wide range of people who have vested interests in each study conducted. Regardless of whether a study is concerned directly with education or aims at informing education by examining what people are doing technology-wise outside schools, education researchers usually need to take into account in their research planning, conduct and write-up their fellow researcher-educators, other educators, other researchers, policy makers, students and parents who are either involved in the research or who will be affected by the outcomes of a study, graduate students who are "apprenticed" to the researcher, government bodies, and other interested people. Meeting the research needs of each of these parties--particularly those who are funding or directly supporting the research in question--can tempt education researchers into quantitative online studies that compute the amount of time people spend online, or that measure the effects of computer use by calculating coefficients of variation between pre-test and post-test scores on school subject content matter, all of which places ethical considerations on familiar ground in terms of meeting standard university or other associations' ethical research guidelines (e.g., respondent anonymity, informed consent from students and parents where minors are concerned, duty of care and beneficence). However, as more and more educators become interested in what young and not-so-young people are doing with computers and the internet in school and out of school, we need to begin engaging with more complex considerations of what "counts" as ethical research where cyberspaces are concerned.

The moral consequences of what we do research-wise in cyberspace

In 1997, Colin Lankshear and I were invited to present a paper at The Australian Association for Research in Education Annual Conference, Brisbane, on the moral consequences of what we construct throughout the course of each research project (Lankshear and Knobel 1997). In this paper we defined moral consequences in terms of those effects or outcomes for the good or harm of human beings within areas of human activity where people can reasonably be assigned rights and obligations (cf., Thomas 1996a, Warnock 1970). In our argument, we identified what we called "bearers of moral consequences"--that is, those people (and sometimes things, such as policy decisions, study outcomes and recommendations, or education programs) that need to be considered in any ethical/moral stock-take.

Colin and I also distinguished between different "points" of moral consequence within qualitative research studies in general. These points were set against a backdrop of higher education in Australia marked by the amalgamation of teacher education with university degree instruction and research, and the unevenness of research experience that came with it; an explosion of graduate and postgraduate degree openings as part of a drive to make Australia internationally competitive; a Commonwealth competitive grants scheme tied to student enrollments and research degree completions; a national push towards a "culture of consultancy" in education; and increasing teaching loads for lecturer-researchers along with intensified administrative demands. We defined these points of moral consequence in terms of (gross) stages or phases within the processes and acts of doing research where what we do, or omit doing, generates some sort of consequence for people involved directly or indirectly in the study. In reality, "points" comprise practically every moment research is "going on", but for purposes of heuristic convenience we distinguished broadly between "front end", "in process", and "back end" points of research conduct and responsibility, which roughly correspond to planning, implementation, and end-of-project dissemination phases.

This same heuristic device proves useful for examining ethically informed approaches to online research that go beyond differentiating between public and private spaces on the internet, obtaining informed consent, and ensuring participant anonymity in reports.

The researched

1. Front end concerns

Commentaries on ethical action within online research tend in large part to focus on the participants in a study. Researcher treatment of participants--or those who are "researched"--is discussed principally in terms of "doing no harm", "beneficence" or "nonmaleficence" (see, for example, AOIR 2001, Johnson 2001). However, the front-end concerns associated with participants begin long before ethically responsible selection criteria and obtaining consent become an issue, and include demonstrating respect for others online by participating in the community to be studied for an extended period of time prior to the start of formal data collection. And, as an aside, spending a good deal of time observing and/or participating in an online community or webspace alerts the researcher to the stability of the community or site in terms of it being active and accessible long enough to collect good quality and sufficient data.

The easy access to online communities afforded by the internet makes it tempting to practice hit-and-run research, where the researcher spends a few days or even a few hours observing the interactions of online participants in a given community, then writes about these as though everything to be known about the community has been observed and understood in that short period of time. This kind of snatch-and-grab approach usually provokes scathing comments from the participants themselves, as witnessed not so long ago with the publication of Douglas Rushkoff's book, Cyberia: Life in the Trenches of Hyperspace (Rushkoff 1994). The book purported to be a study of life online written by a long-term participant. Rushkoff was roundly criticised in a series of very public comments and flames (i.e., vitriolic monologues that critique a position, action or person) for under-researching his topic and for claiming to be an "insider" when he was not (cf., eleven 2001: 1).

Rushkoff countered such criticisms by claiming they were more a case of "sour grapes" than anything else: "A lot of people in San Francisco [have] hated me since 1993 when Cyberia first came out, because it was a book that didn't include most of them," he argues (Rushkoff in interview with Rust 1997: 1). Nonetheless, his credibility as a spokesperson for online communities remains severely compromised in many quarters and the reaction of cyber-citizens to his book sounds a warning bell to would-be researchers of online communities and interactions.

Despite the easy access to data afforded to researchers by online interactions, ethical practice in relation to obtaining informed consent still holds, even if it is argued over by researchers themselves. Some researchers of online practices insist that obtaining informed consent from participants is an inalienable researcher responsibility--and if consent cannot be obtained readily, then the researcher should either change the study's design or abandon the project altogether (e.g., Bruckman 2001). Indeed, some online researchers set themselves very specific rules for obtaining consent. Amy Bruckman, for example, proposes that consent can be given via email if the participant is over 18, but that signed parental consent needs to be mailed or faxed to the researcher if participants are less than 18 years old. However, her position suggests it is possible to ascertain beyond a shadow of a doubt that the person targeted as a participant in a study is indeed aged 18 years or more. The Internet is, of course, rife with children masquerading or avatar-ing as adults (and vice versa).

In most cases, arguments over whether informed consent should or should not be obtained from participants in an online community boil down to arguments over which online spaces are public and which are private. Within the Humanities, most people agree that research conducted within public spaces (e.g., parks, shopping centres, in the street) does not require the researcher to obtain informed consent from participants (cf., Goffman 1963, 1974). However, few researchers of online practices appear to agree on what criteria should be brought to bear on a space in order to judge it "public" or "private". Some argue that the public-ness or private-ness of an online space should be judged according to how it is perceived by the people who interact within it. Allison Cavanagh, for example, points out that public space metaphors abound online--such as "village", "cafe", "town halls", "town squares"--and indicate the non-private status of these different spaces (1999: 1). Cavanagh also points out that "lurkers" or non-contributors to online interactions are tolerated, if not expected or assumed, in online communities or discussion groups. She observes that when lurkers change their status to more active participation, they are generally welcomed warmly by the community or group. Cavanagh attributes this to a shared cultural assumption about life online that "internet interactions occur within a public arena and are therefore matters for public consumption" (p. 3). Indeed, on the eBay discussion lists, for example, new posters often are berated for not spending some time lurking and getting a feel for what has already been discussed and what advice has already been offered in response to other people's eBay-related problems. In addition, many posters' messages to the eBay feedback discussion board are clearly directed to a wide and anonymous audience, and include, for example, unsolicited open letters to newcomers about how to participate effectively in eBay transactions, general calls for comments or advice on a problem encountered within a transaction, and so on.

Other researchers, however, call for the physical nature of the space to be taken into account when judging whether an interaction is public or private (Frankel and Siang 1999). For example, password-protected communities--such as some online cafes and salons--are generally assumed to be private spaces. On the other hand, archivable discussions--such as those generated on web-based discussion boards--are generally presumed to be public spaces (cf. AOIR 2001). However, as Rushkoff found out, these distinctions do not always hold, and it is the responsibility of the researcher to make reasoned judgements concerning the nature of the space. This almost always involves participating in an online community prior to the start of a formal study so that the researcher can ascertain what kind of community--public or private, or a mix of both--members assume the space to be, and act accordingly.

Some researchers suggest the best response to the public-private dilemma is to create purpose-built research spaces online, such as a room in a MOO or an email discussion list, established explicitly for collecting interactional data, with the purpose of the room written into its publicly available description (Bruckman 2001). Other strategies include setting up websites and the like that signal the researcher's status. The ease of covert observation and data collection brings with it a range of ethical responsibilities not always found in meatspace research contexts. These include thinking through and even pre-planning the public identity the researcher will project onto the online space to be studied and how this identity will be communicated to others (e.g., through careful choice of an alias, a judiciously worded character description, an avatar that carries a magnifying glass and notebook). Public declaration--as far as is possible within the research context, and not always an easy undertaking even in meatspace--of one's role as a researcher of online practice is important. This includes establishing means for participants and non-participants alike to contact the researcher about his or her research work. This kind of openness contributes to a researcher's credibility as someone with nothing to hide from study participants.

For example, in my ongoing study of eBay, I have used the "About Me" webpage function on eBay to alert other eBay users to my role as researcher and explicitly to invite people to be interviewed as part of my investigation of community practices on eBay. eBay provides each registered user with a "Me" website where she can write about her hobbies, interests or anything else that comes to mind that she believes others will find interesting or useful. This page is accessed by clicking on the "Me" icon appearing beside registered users' eBay aliases.

In my case, my "Me" page (<http://members.ebay.com/aboutme/netgrrrl/>) advertises my role as a researcher of eBay practices and interactions. The downside to this kind of revelation work is that I have no way of judging who visits my page. I have a hunch that not many do, as a year after putting this page in place I have yet to be contacted by anyone offering to be interviewed. In addition to this webpage, I actively participated in eBay auctions for some time prior to collecting data, and built a credible online reputation by means of the transactions I was involved in and the positive ratings I received from sellers.

In terms of front-end concerns that pertain to those being studied in any given project, and as with offline research conducted within social contexts, many participants in the study will come with the territory being studied. However, and again as with offline studies, the researcher needs to be aware of the social dynamics in which people to be targeted for interviews are located and what role or roles they usually play within a community. For example, if a researcher only interviews or studies the talk of newcomers, the insights offered into the community will not be as "experienced" or as historically informed as insights garnered from long-term members. Moreover, selecting a trouble-maker or "troll" as a key participant in a study can skew the researcher's interpretations in unjustifiable ways. In short, prior to formal data collection, the researcher is served well by spending a substantial amount of time getting a "feel" for the kinds of interactions that take place within the targeted online community, who the regular participants are, what some of the contentious issues are for community members, who the trouble-makers are and who they tend to target (and why, if possible), and the like. This, in turn, enables the researcher to treat all participants with the respect to which they have a right, to conduct himself or herself as an informed and non-threatening member of the community once formal data collection starts, and to build into the study, right from the start, measures for obtaining balanced insights into the community's interactions and practices.

2. In-process concerns

Obtaining consent, declaring oneself a researcher and spending time in an online community prior to formal data collection are only the start of a researcher's ethical responsibilities where those who are being researched are concerned. Once formal data collection has begun, the researcher must continue to maintain participants' confidence in the project and trust in the researcher herself. This includes maintaining a consistent online persona, not flaming other members for something they did, avoiding long-winded rants, and generally paying attention to the social needs of others by not being overly intrusive or persistent in asking questions, or constantly online.

The relative physical anonymity of online spaces makes it all the more important for a researcher to use only one identity within a researched space. Even within public spaces such as eBay discussion lists, or the Plastic.com news and participant commentary website, posters who use more than one online alias or username are invariably criticised and suspected of deeper duplicities, regardless of their reasons for doing so (e.g., wanting to use one alias to post a certain kind of message, to avoid receiving personal emails from others, or to avoid becoming a target for negative ratings). A researcher who interviews under one alias, but participates within discussions under another, not only interferes needlessly with the data to be collected, but risks publicly alienating others should they discover the deceit. Demonstrating respect for participants by practising restraint online is a key element in ethical research behaviour. In meatspace, ethical self-monitoring requires classroom researchers to avoid interjections while observing teachers and students; likewise, online researchers need to take care that the seeming anonymity the internet generates does not lead them into dominating an interactional space, chastising participants, or taking offense at something said (within reason, of course: cases where researchers have been forced to intervene in hurtful activities or identity thefts taking place in chat spaces are well documented; see, for example, Dibbell 1998, Turkle 1997).

Employing a reciprocity factor in online studies is another way of demonstrating ongoing respect and of minimising accusations that only the researcher will benefit from the study. The kinds of reciprocity that online researchers can offer study participants include helping them with some online task such as writing "bot" programs (e.g., a small program that acts as a butler in a MOO room, welcoming people as they enter the door), helping solve HTML dilemmas encountered in setting up a personal website, offering lists of URLs for relevant information on a topic or issue needed by a participant, and suchlike. In Colin Lankshear's and my eBay research, we interview only people we have met face to face or from whom we have bought something. This line of approach could be regarded as problematic by some researchers in that it limits the scope of our interviews, or because having purchased something from someone risks their feeling obliged to respond to interview questions via email. However, researchers do not have an inalienable right to expect people to want to be researched for nothing in return. In many ways, the reciprocity factor reminds the researcher to appreciate the time and effort outlaid by the participant in responding to questions, agreeing to be observed while using a computer or the internet, and suchlike.

3. Back-end concerns

One representational issue particular to cyberspace is the researcher's commitment to anonymity. Indeed, the very ease of access to data on the internet also makes it possible for readers to locate much of the data used in a study for themselves, effectively blowing any pseudonym cover the researcher may have attempted for participants within published reports. Researchers studying archived data (e.g., web-based and email-based discussion lists) or websites cannot ensure anonymity for participants. Some researchers rightly point out that using pseudonyms for participants with well-established online identities actually interferes with the integrity of the study because it removes an important data layer concerning the online alias people choose to use and the identities they craft via these aliases (cf., accounts in Cavanagh 1999, Frankel and Siang 1999). Other researchers argue that aliases are part and parcel of a "consciously 'public' performance for others" (AOIR 2001: 1) in which users participate willingly and openly, and thus cannot be subjected to the same pseudonym rules as apply in meatspace (although pseudonyms are no guarantee of anonymity for participants in meatspace either; see Lankshear and Knobel 1997). Still others problematise the issue of aliases and pseudonyms even further by pointing out that "much of the conversation analyzed in these [online] contexts involves references to others' pseudonyms--and thereby their character, behaviors, etc. Hence to change nicknames or pseudonyms would dilute--if not render unintelligible--the meaning of specific exchanges" (AOIR 2001: 1). And of course, the danger with replacing an alias with a pseudonym is that the pseudonym could prove to be the alias of someone else in another space--or even the same space--which makes for untold confusion and possible embarrassment. My own approach to this issue is to weigh up the extent to which readers of texts about my research can readily access the data I draw on in my accounts when deciding whether or not to use pseudonyms in reporting online interactions. In cases where hiding the participant's identity is close to impossible, I advise the participant of this and obtain their consent to use their "real" alias. In other cases, I either ask participants to nominate a pseudonym for themselves, or I invent one that is as close in nature to the original as possible--always running Internet checks to see if the alias is already in use by someone else.

Regardless of personal feelings, researchers are duty bound to represent study participants fairly, respectfully and with dignity. Many representation concerns from meatspace research transfer directly into cyberspace. This includes decisions concerning whether or not to edit participant-generated texts copied from emails, discussion boards, websites and the like; minimising normative evaluations of an event, practice, or person; if actively participating in the online community being studied, then remaining committed to it long after the study has been completed; producing a text worth reading as a sign of respect for the time and data participants gave to your study; drawing logical, informed, and well-argued conclusions from the data; and so on.

Fairly and respectfully representing study participants can be difficult at times because cyberspace is not always a harmonious social sphere, and often the most intriguing or culturally revelatory events are those where the ugly underbelly of being human is exposed to a viewing public (cf., accounts in Dery 1995, Dibbell 1998). Many online researchers point out that this obligation extends to researching hate websites or hate speech--websites or discussion lists devoted to usually fascist commentaries on the supremacy of one race over another, or one set of beliefs and/or values over another (e.g., anti-gay websites, websites belonging to white supremacist groups). For example, Bruckman advises: "You can respond to hate speech or other undesirable behavior online as a netizen or as a journalist, and there are few restrictions on your ethical conduct--email their website manager, publish letters decrying their behaviour, do whatever you can. But as soon as you put on your researcher hat, you owe them the same treatment you do any other subject" (Bruckman 2001: 3). Of course, the danger with this approach is that "research on specific behaviors (pornography, hate speech, etc.) may work to legitimate those behaviors. That is, if re-presented carelessly in research, these behaviors may be 'packaged' in such a way (e.g., through the neutral, ostensibly objective language of social science) as to make them seem more acceptable for the broader society" (original emphases; Elgesem, cited in AOIR 2001: 1). Increased access to a wide range of morally problematic activity online means researchers need to pay careful attention to issues concerning the representation of participants and their interactions online. One proposed solution to this dilemma is to structure the study in such a way that the linguistic choices made by participants, the interactional rituals they enact, or the cultural meanings they share via their language use become the focus of the study, rather than the actual content of the website or discussion list per se.

My own position on this is that demonstrating respect for others requires the researcher to represent each major participant in as fully dimensional a way as possible. In other words, in any defensible study, the researcher needs to describe the complexities that make up the online identities of key participants and to locate them within the complex web or context of enacting a particular identity online (and usually offline as well). The researcher's role of always asking, "What's going on here, and why?" remains intact in online research. Representation as an ethical concern is not something that is attended to only once data have been collected and the time has come to write up interpretations; it needs to be considered right from the start of planning the study so that the right kinds of data are collected--such as detailed character descriptions, detailed context descriptions, and so on.

In addition to considering ethical responses to front-end, in-process and back-end points of concern where participants are concerned, online researchers also have duty of care responsibilities towards other researchers and to their own academic field of endeavour.

The researcher and her craft

1. Front-end concerns

Online researchers also have ethical responsibilities that relate directly to the practice of research within their field. Within sociology, for example, researchers are clearly concerned about the reputation of the discipline itself and work hard at ensuring sufficient guidelines for conducting online research are available to sociologists (and others). However, establishing guidelines is only the first step.

What is often overlooked where ethics and online research are concerned is the exclusionary nature of the medium itself. Regular and sustained physical access to computers and the Internet of the kind that enables medium- to long-term participation in web-based activity remains generally confined to the middle and upper classes throughout the world (Pastore 2001, Victory and Cooper 2002). When ethnicity is taken into account, the marginalizing properties of the Internet become even more pronounced. In September 2001 in the US, for example, 71.2 percent of Asian Americans and 70 percent of non-Hispanic Whites were found to have ready access to computers at home, while only 55.7 percent of African Americans and 48.8 percent of Hispanics had similar access (Victory and Cooper: 21). This same national study found that "Internet use [at home] among Whites, Asian American [sic] and Pacific Islanders hovered around 60 percent, while Internet use rates for Blacks (39.8 percent) and Hispanics (31.6 percent) trailed behind" (ibid.). Recent income statistics released by the US government indicate that the median income for Hispanic households is currently $30,439 per annum and for African American households is $33,447 per annum, while non-Hispanic White households and Asian households have an annual median income of $45,904 and $55,521 respectively (Bush 2002). Outside the US, the difference between those who can afford to access and use computers and the Internet on a regular basis and those who cannot is even more marked (cf., Warschauer 2002).

Although marginalized groups are increasingly making effective use of community-based computing centres and facilities and shared neighborhood computing resources, they remain marginalized on the Internet and in online research. This throws into question whether online research can ever be ethical when already marginalized groups are automatically excluded from participation and from research consideration (Steinberg 2002). To complicate matters, the physical "markers" of ethnicity can generally be hidden or invented online if a person chooses to do so, making it difficult--if not impossible, or at the very least highly complicated--for researchers to assign ethnicities to participants with any credibility. There are no easy answers where marginalized groups and the Internet are concerned. One possible, albeit limited, response researchers can make is to draw overt attention to the inequities inherent in online research in their published work (Steinberg 2002).

In terms of the researcher's craft itself, one key but often overlooked element in maintaining the reputation of that craft--the act of carefully planning, carrying out and disseminating research--and in conducting ethically informed and responsible research is a well-designed study. A well-designed study is one that is grounded in a meaningful problem of some kind, is framed by a well-formed and manageable research question and a workable theory or set of theories, has carefully selected data collection and analysis tools and techniques that will produce the kind of data and outcomes needed for addressing the research questions, and is written up in a timely manner (cf., Knobel and Lankshear 1999). A well-planned study indicates in advance the time frame to which participants will need to commit upon agreeing to take part in the study, the extent to which participants will be required to contribute data, and the kind of data that will be expected from them (e.g., two email interviews over a period of four weeks, a participant's set of postings to a discussion list over a period of six months), and so on.

A poorly planned study will appear ad hoc to participants and may even undermine their confidence in the researcher as someone who knows what she is doing, reflecting poorly in turn on the institution or area in which the researcher works. Participants may feel put out if the researcher changes her mind and, instead of conducting the one interview the participant agreed to, asks for responses to five or so different sets of questions at five different times. Collecting gigabytes of data from people without a clear plan in advance for how the data will be analysed and written up simply wastes people's time, and makes them loath to participate in future research (regardless of who is conducting it).

Even specific tools and techniques for collecting online data come with a range of ethical issues. For example, one popular method for keeping tabs on the websites children visit at home or school is tracking software, which records the URLs visited, the order in which they were visited, and even the amount of time each web page was up on the screen. This kind of software has enormous implications where a researcher's duty of care towards children and children's rights to privacy are concerned. Although some schools make use of such software to monitor improper uses of the Internet, this does not necessarily make the software a good thing, nor does it mean that researchers have a right to make use of the data such software generates, or to employ such software elsewhere.
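
By way of illustration only, the short sketch below (in Python) shows the kind of record such tracking software might generate; the field names, values and student identifier are invented for this example and do not describe any particular commercial product.

# Hypothetical illustration of the kind of record tracking software might
# produce; field names and values are invented for this sketch.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PageVisit:
    user_id: str          # often a login name, i.e. an identifiable child
    url: str              # every page visited, including off-task browsing
    visited_at: datetime  # when the page was opened
    seconds_on_page: int  # how long it stayed on screen

log = [
    PageVisit("student_07", "http://example.com/homework-help",
              datetime(2002, 3, 4, 10, 15), 240),
    PageVisit("student_07", "http://example.com/chat",
              datetime(2002, 3, 4, 10, 19), 55),
]

# Even this toy log supports inferences about one child's attention and
# interests, which is where the duty-of-care and privacy concerns arise.
total_seconds = sum(visit.seconds_on_page for visit in log)
print(f"{log[0].user_id}: {len(log)} pages, {total_seconds} seconds logged")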

Research in schools has regularly been a victim of poorly planned projects, with many teachers feeling "researched out" by participating in studies that have dragged on for longer than expected or have fizzled out altogether. Research in cyberspace that aims at informing education with insights gained from observing online textual practices and interactions particularly needs to be carefully thought through and rigorously planned in order to avoid similar problems within online communities.

2. In-process concerns

Data on the internet can be as ephemeral as it is abundant. One of the first things the researcher needs to plan carefully is how online data is to be collected and stored, because no-one can guarantee that the data will remain in place, even for a short time. Some websites expressly forbid webpages to be copied or saved without a prior agreement from the owners of the website, and I have found that reading a website's user agreement, where one exists, is a good guide to what the owners of the website consider to be ethical and responsible action with respect to the data contained there. If a researcher plans on downloading enormous amounts of online data, then suitable storage devices need to be in place prior to the study. This often includes CD-ROM or DVD writer components, high-density storage disks (e.g., Zip or Jaz disks), high-capacity computer hard drives, compression software, and so on.
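
As a minimal sketch of one routine that can help preserve ephemeral online data, the Python fragment below downloads a page and saves it under a date-stamped filename. The URL and folder names are placeholders, and such copying assumes the website's user agreement permits it.

# A minimal sketch of archiving ephemeral web data with a timestamp,
# assuming the website's user agreement permits copying.
import urllib.request
from datetime import datetime
from pathlib import Path

def archive_page(url: str, archive_dir: str = "archive") -> Path:
    """Download a page and save it under a date-stamped filename."""
    with urllib.request.urlopen(url) as response:  # fetch the current version
        content = response.read()
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    safe_name = url.replace("://", "_").replace("/", "_")
    out_path = Path(archive_dir) / f"{stamp}_{safe_name}.html"
    out_path.parent.mkdir(parents=True, exist_ok=True)  # create the archive folder if needed
    out_path.write_bytes(content)
    return out_path

# e.g., archive_page("http://example.com/discussion-board")

Keeping each saved copy date-stamped also preserves a record of when the page looked the way the researcher later describes it, which matters for the credibility issues discussed below.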

The online researcher needs to practice her craft carefully throughout the study--keeping meticulous notes about what transpires, collecting representative or key artifacts, following up on leads outside the immediate research context via URLs posted by others or references to other websites or discussion lists made in people's online conversations, and the like. This process is made even more complex when the researcher is investigating online and offline activity simultaneously (e.g., focussing on what young children at school and at home do while sitting at the computer and accessing this website and that). Collecting data about physical space and cyberspace interactions requires the researcher, early on in the process, to develop a cross-referencing and data management system that enables her to match up relevant downloaded data with data collected manually in the field, in order to remain methodical and organised in her approach to the study. Without a data management and retrieval system in place, researchers can easily lose track of data (if not lose it altogether), or fragment the data set so that patterns and hunches that could be followed up on during the research process do not become evident until long after the data collection phase has ended, and so on.
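
One minimal way of setting up such a cross-referencing system is a simple index that pairs each downloaded file with the corresponding field notes, as in the Python sketch below; the file names, participant code and column headings are invented for illustration.

# A sketch of a simple cross-referencing index linking downloaded online
# data to manually collected field notes. All names and values are invented.
import csv

index_rows = [
    {"date": "2002-03-04",
     "participant": "P01",
     "online_file": "archive/20020304-101500_chatlog.html",
     "fieldnote_file": "fieldnotes/2002-03-04_home-visit.txt",
     "notes": "child switched between chat and homework site"},
]

with open("data_index.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(index_rows[0].keys()))
    writer.writeheader()
    writer.writerows(index_rows)

Even a flat index like this makes it possible to retrieve, months later, both the online artifact and the field observation that gave it its meaning at the time.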

Researchers of cyberspace also need to stay alert to complaints that participants and non-participants alike make about the research process, as well as practice regular reflection on and evaluation of the research process itself. The newness of internet interactional spaces does not mean that people cannot become jaded with having a researcher regularly participating in their conversations, or with feeling constantly under surveillance. Indeed, this message was writ large in a series of message board postings I witnessed on the eBay feedback discussion list about a man (self-described) who regularly declared he was writing a book about the eBay community, but who repeatedly posted long and rambling responses to calls for help from users and presented himself as an expert on all eBay matters. Participants put up with this for some time before exploding into scathing calls for him to hurry up and finish his book so that he would then leave the list and everyone else alone.

3. Back-end concerns

Researching online interaction and activity brings with it particular issues concerning the validity or credibility of interpretations and the trustworthiness of the project overall. It is generally well accepted in research circles that qualitative research projects attend to verification criteria other than traditional, quantitative processes of ensuring the reliability and validity of a study. These criteria centre on the communicative validity and the trustworthiness of the study (Kincheloe and McLaren 1998, Knobel and Lankshear 2001). Communicative validity is concerned with judging the soundness of the overall argument put forward in research reports (Carspecken 1996: 59).

There are a number of well-recognised strategies for establishing the communicative validity of interpretations and claims in research reports. These include cross-examining multiple sources of data or evidence, using negative cases, member checking, outsider audits, and so on. In terms of research online, employing communicative validity measures can actually be facilitated by the very nature of online data. For example, data collected over a given period of time can be compared and contrasted with previously archived data from the same chatspace, discussion list or website in order to add further weight to an interpretation. Ready access to negative cases can be provided through search engine functions within the website being studied, or across the Internet in terms of drawing negative cases from similar sites or services. For example, coming across new terms developed to describe socially censured activity within the eBay community--e.g., deadbeat buyers, snipers, feedback bombing, feedback extortion, being "NEGed" (i.e., receiving a negative rating)--led me to search the Plastic.com message archives for similar negative cases in a recent analysis of the induction into social cyberspaces that takes place within these two community-oriented and user-driven web services. Member checking data interpretations with participants remains as difficult to do in cyberspaces as it is in physical spaces--although I generally find conducting member check discussions via email to be more successful and generative than in physical space, because participants can work feedback in around their own schedules, rather than agreeing to meet for discussion or setting a time to be telephoned.
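
As a rough illustration of how archived material can be scanned for candidate negative cases, the Python sketch below searches a folder of downloaded postings for community-specific terms; the terms come from the eBay examples above, while the file layout is invented.

# A first-pass search of archived postings for community-specific terms.
# Terms are taken from the eBay examples above; the archive layout is invented.
from pathlib import Path

terms = ["deadbeat", "sniper", "feedback bombing", "feedback extortion", "neg"]

def find_candidate_posts(archive_dir: str = "archive") -> dict:
    """Return, for each term, the archived files whose text mentions it."""
    hits = {term: [] for term in terms}
    for path in Path(archive_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        for term in terms:
            if term in text:
                hits[term].append(path.name)
    return hits

# Matches are only starting points: each post still has to be read in context
# before it can count as a negative case (or be discounted).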

The trustworthiness of a study is concerned with the degree to which a reader can trust and believe in the quality of the study itself (cf., Lincoln and Guba 1985, Denzin 1998). The key to collecting high quality data is constructing a sound and coherent research design (Knobel and Lankshear 1999, Lankshear and Knobel 2000). Other things being equal, having a well-formed and manageable research question, a worthwhile research problem and aim, and a clear plan of what needs to be done will generate data that enable the researcher to address the research question in a full and satisfying manner. Believability depends on the researcher clearly demonstrating that she has collected data that are sufficient for her research needs (sufficiency being determined in large part by the research question she has asked). Producing a credible study means ensuring that the research question(s), theoretical framing, and data collection and analysis designs cohere, and that each is explicit, justified and appropriate.

As with meatspace studies, researchers cannot take what people say at face value, but need to cross-check it with things they have said in the past in order to ascertain the degree to which the participant is or is not "having them on", "pulling their leg" or generally providing misinformation. The need to demonstrate that the data collected is credible underscores the importance of the online researcher establishing a trusting rapport with participants, so that ethical activity and respect are iterative and reciprocal. Credibility takes on additional dimensions when data collected online is involved. This is not so much because readers can often access the very data used in the report to check and verify claims and interpretations made by the researcher, but because this accessibility is assumed by researchers and readers alike (except where non-archived chatspace is involved) and generally is treated as another (potential) verification checkpoint within a study. Herein lies an interesting paradox. Despite general and widespread recognition that the internet is an amorphous, ever-changing network, when the data used in a study has been removed or is no longer archived or accessible for one reason or another, the credibility of the study can be thrown into question.

Colin Lankshear and I recently ran into this very problem in a chapter we wrote for a journal in which we reported our ongoing study of the British National Grid for Learning, or "Grid" for short. The Grid is a network of hardware, software and websites that forms the lynchpin of the government's push towards technologizing all four countries within the union and creating an advanced "learning society" (Blair 1999: 1). In the year or so that we had been observing the online development of the Grid portal--a website of categorised links that acts as a launch pad for Grid users--very little had changed in terms of the website design and the way in which content was presented and organised. In the past month, however, the website has been completely revamped and reorganized. This held huge implications for critiques we had written about the Grid that were about to go to press. Indeed, we felt our critiques were made worthless by the changes because the very things we were criticising--although still very much a part of the website, just located in different areas--were no longer on the Internet where we said they were. For example, the front page we had described in order to ground later discussions now looked completely different. We had no hope of claiming our data were trustworthy when not even our descriptions of the website would ring true should readers of our text who were new to the Grid visit the website and find the very first page completely different from what we described (thus naturally throwing our subsequent claims and discussions into question). Our only option was to rewrite much of the original description. Of course, not everyone has the luxury of the patient publishers we had in this case; but the episode does highlight an interesting paradox concerning the widely recognised ephemeral nature of the Internet and research credibility in readers' eyes.

Finally, in terms of researchers and their craft, the accessibility of the Internet also has the potential to generate an "Everyone's an Expert" syndrome, where researchers assume that spending a little time in this chatspace and on that discussion list qualifies them to write at length about online practices. Appropriate representation of interactions or life online requires the researcher to be able to distinguish between different kinds of interactions. For example, if a rant--an extended, always passionate monologue about a usually narrow topic that is of almost obsessive interest to the author--is equated with a flame, which in turn is equated with an ongoing feud between two participants, and then with direct contributions to the discussion, and all are treated as equivalent within analysis and reporting, the resulting interpretations cannot provide a fair or even accurate account of what took place within the studied space. In my ongoing study of Plastic.com--an online, user-generated news and commentary service--it took extended reading of user comments over time to establish which comments were targeting the news items under discussion at the time, and which were actually part of ongoing subtle (and not so subtle) attacks on specific users. Without being able to distinguish between the two types of posting, I could have portrayed Plastic as a relatively acrimonious interactional space, which is actually far from the case.

Research supervision

Increasingly in Education, research supervisors are expected to take on more research students, ensure these students graduate, and continue with their own teaching, researching and publishing efforts. More and more in countries like Australia and the US, education department or faculty funding is tied directly to research student enrollments and graduation rates. As Colin Lankshear and I have written elsewhere (1997), many postgraduate Education students come to qualitative research from undergraduate teaching degrees which are often content-dominated, have been short on "meta level" teaching and learning, and where prior exposure to serious engagement with research methods and literature often approximates to zero. Whereas undergraduate degrees in other areas tend to draw directly on primary theories (Gee 1996) and disciplines (e.g., Sciences, Humanities/Arts), and are expected to provide lengthy and, ideally, deep exposure to core theory, conceptual-analytic procedures, research methods, and voluminous research-based literatures, undergraduate teacher education degrees have different priorities (Lankshear and Knobel 1997).

Research supervisors within Education thus need to pay extra attention to the knowledge base of their students and to ensure that these students know how to engage in online research that is theoretically and methodologically informed and coherent, well designed, rigorously conducted, and so on. Indeed, online research--with its relative ease of access to well-defined groups of people or sets of texts, its abundance of data, the flexibility opened up by easy access via any computer almost anywhere, and the appeal of investigating cyberspace per se because it has a default "cutting edge" feel to it--risks lulling supervisors into sanctioning "smash and grab" student research when other pressures take attention away from overseeing each student's research planning and design processes, ensuring that students are paying full and careful attention to their own ethical responsibilities as researchers, checking that students are sure the site or community they plan to study will not suddenly disappear before their data collection has been completed, and apprenticing students to conducting theoretically informed research that addresses a genuine problem and/or a set of well-formed and sound research questions. Part of supervisors' responsibilities towards their research students is to spend some time online themselves, becoming familiar with the range and kinds of social practices, texts and interaction patterns found there.

Consumers

Consumers of research--i.e., those for whom the research has use value--include the researcher and her wider community of inquirers, theorists, and commentators; participants; groups of people who have a stake or vested interest in the phenomena under study (e.g., schools, parents, students, community, teacher educators, education departments, the media, etc.); and organisations which have identified a research "need" and provided funding for researching it (e.g., universities; local, state and federal bodies/agencies). Within education, the large number of research consumers that need to be taken into account can place additional pressure on researchers to look only at what is happening in schools technology-wise. Unfortunately, however, the most widely valued kinds of new technology uses (e.g., higher order thinking, innovation skills, design literacies, computing technical knowhow) tend to be those that young people engage with outside school (Alvermann 2002, Gee forthcoming, Lankshear and Knobel forthcoming). The needs of research consumers who have vested interests in the studies conducted by academics and consultants--because they are funding the studies, participating in them, or hoping to gain educationally from them--generate a number of ethical dilemmas for education-oriented online researchers, who need to decide how far to participate in research that focusses on technology in education contexts, and to what extent online research should be conducted outside school contexts so that education can be brought more closely into line with what young people can already do, as well as with what they will need to be able to do and be once they have left school.

For example, one ethical issue of increasing concern involves decisions concerning what to research in education. In the US at present, for example, websites devoted to teachers and students that present testing and practice exercises aligned directly with national and/or specific state education standards are beginning to proliferate. Schools are investing heavily in online services that automatically assess students' essays and test reading comprehension, in web-based lesson plan generators (complete with state or national standards indicators and assessment rubrics), in learning portals similar in kind to the UK's National Grid for Learning, and so on. Most of these applications merely automate existing classroom practices (e.g., multiple choice tests, spelling tests, busy work sheets, assessment and evaluation), with little to recommend them in terms of real engagement with important forms of self-directed learning, higher order thinking, research skills, information evaluation, and the like. The funding available for studying the take-up and use of these technologies is on the rise; however, researchers interested in the ways in which new technologies can be used to address existing inequities between certain groups of children will need to reflect carefully on how best to research these applications without contributing further to maintaining existing school-based inequities among children. This can become particularly vexing when permission to conduct research in a school is predicated on an evaluation of a web-based or computerised learning system in which the school has invested heavily. Indeed, ethical approaches to studying new technologies, cyberspaces and education require the "end users" or targeted consumers of the research outcomes to be factored into the project right from the start.

Conclusion

To sum up, researching cyberspaces does bring with it a distinct set of ethical issues that a researcher needs to attend to while planning and designing a project, while conducting it, and while writing up and disseminating it--issues that are in addition to the ethical concerns found in meatspace. For every ethical rule someone puts forward, someone else can find a situation online where the principle cannot possibly hold (e.g., the principle of ensuring anonymity, or the principle of obtaining informed consent). Running through the front-end, in-process and back-end points of ethical consideration discussed so far have been at least three key precepts or maxims that I find particularly useful in guiding ethical decision-making within my own research. These are:

Maxim 1: Do no harm

Maxim 2: Be informed, honest, and open

Maxim 3: Be prepared, and practice ongoing reflection in relation to the research process

Maxim 1: Do no harm

This first maxim holds across the board and is easily applied to research decision-making (as well as to other spheres of conduct). If there is a likelihood that a research study or data collection tool etc. may inflict harm of any kind (e.g., physical, psychic, emotional, mental) on someone, then the study should not be done or the tool used. This maxim calls for researchers to make the research study "unfamiliar" to themselves while assessing the potential for harm, and to think through the possible consequences of the study's approach, the kinds of data to be collected and generated, and what will be done with the data in terms of reporting. Treating others well includes being always courteous, and practising genuine reciprocity whenever possible.

In short, this first maxim calls for the practice of an "ethical wisdom" and a general, demonstrated respect for others that draws directly on knowledge of ethical problems others have encountered in their online (and offline) research, how these problems came about and how they were or could have been resolved or avoided altogether.

Maxim 2: Be informed, honest, and open

Honesty and openness are always the best policy where online research is concerned. This includes advertising one's status as a researcher to study participants and non-participants alike within the targeted online community. It also calls for researchers to post contact details in open and accessible ways so that participants and non-participants may ask questions about the research process at any time. Honesty and openness also extend to the fair and respectful representation of the study context and study participants.

Maxim 3: Be prepared and practice ongoing reflection in relation to the research process

Simply attending to front-end concerns is never enough where ethical research conduct is concerned. Paying constant attention to the key points of potential ethical concern and to the bearers of moral consequences associated with each study is crucial to ensuring, to the best of one's ability, that the study has been designed, implemented and written up with all due attention to the well-being of others, to the betterment of education as a field, and to one's own development as an ethically aware researcher.

An insistence on developing a set of hard and fast ethical rules or codes will most likely encourage a checklist mentality among researchers, who tick off one rule after another that they have followed in the course of a project. Or it may generate a rash of positivistic studies of online behaviour as researchers turn to controlled, experiment-style research in order to ensure that all possible ethical considerations have been addressed. Following a set of maxims or principles instead gives online researchers the moment-by-moment flexibility they need to respond to ethical points of concern as they arise, and forces them to be self-reflective and self-monitoring practitioners of their craft.

Bibliography

Alvermann, D. (Ed.) (2002). Adolescents and Literacies in a Digital World. New York: Peter Lang.

AOIR (Association of Internet Researchers) (2001). AOIR Ethics Working Committee: A Preliminary Report. Online. <http://aoir.org/reports/ethics.html> (16 March 2002).

Blair, T. (1999). Foreword. National Grid for Learning: Open for Learning, Open for Business. The Government's National Grid for Learning Challenge. London: DfEE. Online. <http://www.dfee.gov.uk/grid/challenge/foreword.htm> (21 March 2001).

Bruckman, A. (2001). Ethical Guidelines for Research Online: A Strict Interpretation. Unpublished position paper. Online. <http://www.cc.gatech.edu/~asb/ethics> (28 February 2002).

Bush, G. (2002). Economic Statistics Briefing Room. Online <http://www.whitehouse.gov/fsbr/income.html> (accessed 12 April 2002).

Carspecken, P. (1996). Critical Ethnography in Educational Research: A Theoretical and Practical Guide. New York: Routledge.

Cavanagh, A. (1999). Behaviour in Public? Ethics in Online Ethnography. Cybersociology. 6. Online. <http://www.socio.demon.co.uk/magazine/6/cavanagh.html> (28 February 2002).

Denzin, N. (1998). The art and politics of interpretation. In N. Denzin and Y. Lincoln (Eds.), Collecting and Interpreting Qualitative Materials. Thousand Oaks, CA: Sage. 313-344.

Dery, M. (1995). Flame Wars: The Discourse of Cyberculture. Durham, NC: Duke University Press.

Dibbell, J. (1998). My Tiny Life: Crime and Passion in a Virtual World. New York: Owl Books.

eleven (2001). encyclopedia. Online <http://www.eleven.i-p.com/encyclopedia.txt> (17 March 2001).

Fetterman, D. (1989). Ethnography: Step by Step. Newbury Park: Sage.

Frankel, M. and Siang, S. (1999). Ethical and Legal Aspects of Human Subjects Research on the Internet. Washington, DC: American Association for the Advancement of Science.

Gee, J. (forthcoming). Literacy and Learning in Video and Computer Games. Unpublished manuscript version.

Goffman, E. (1963). Behavior in Public Places: Notes on the Social Organization of Gatherings. New York: Free Press/Macmillan.

Goffman, E. (1974). Relations in Public: Microstudies of the Public Order. Harmondsworth: Penguin.

Ess, C. (2001). Report on Internet research ethics. Humanist Discussion Group. 15(357). Online. <http://lists.village.virginia.edu/lists_archive/Humanist/v15/0341.html> (28 February 2002).

Johnson, D. (2001). Computer Ethics. 3rd edn. Upper Saddle River, NJ: Prentice-Hall.

Knobel, M. and Lankshear, C. (1999). Ways of Knowing: Researching Literacy. Newtown, NSW: Primary English Teaching Association.

Knobel, M. and Lankshear, C. (2001). Maneras de Ver: El Analisis de Datos en Investigacion Cualitativa. Morelia: Instituto Michoacano de Ciencias de la Educacion.

Kvale, S. (1994). Validation as communication and action: On the social construction of validity. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, April 4-8.

Lankshear, C. and Knobel, M. (1997). The Moral Consequences of What We Construct Through Research. Paper presented at the Australian Association for Research in Education Annual Conference, Brisbane, November. Online. <http://www.oocities.org/c.lankshear/moral.html>

Lankshear, C. and Knobel, M. (forthcoming). New Literacies, Changing Knowledge and the Classroom. Buckingham: Open University Press.

Lather, P. (1991). Getting Smart: Feminist Research and Pedagogy With/in the Postmodern. New York: Routledge.

Lincoln, Y. and Guba, E. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage.

NESH (National Committee for Research Ethics in the Social Sciences and the Humanities) (2001). Guidelines for research ethics in the social sciences, law and the humanities. Norway. Online. <http://www.etikkom.no/NESH/guidelines.htm#guide> (16 March 2001).

netgrrrl (12) and chicoboy26 (32) a.k.a. Knobel, M. and Lankshear, C. (2002, in press). What am I bid? Reading, writing and ratings at eBay.com. In I. Snyder (Ed.), Silicon Literacies. London: Routledge-Falmer.

Pastore, M. (2001). Online consumers now the average consumer. Cyberatlas. 12 July. Online <http://cyberatlas.internet.com/big_picture/demographics/article/0,,5901_800201,00.html#table> (accessed 12 April 2002).

Reid, E. (1996). Informed Consent in the Study of On-Line Communities: A Reflection on the Effects of Computer-Mediated Social Research. The Information Society. 12(2): 169-174.

Rushkoff, D. (1994). Cyberia: Life in the Trenches of Hyperspace. San Francisco, CA: HarperCollins.

Rust, M. (1997). Gen-X Seer Is Attacked by Alternativist Hysteria. Washington Times. Monday, August 4. Section: Culture. 22. Online. <http://www.rushkoff.com/washtimes.html> (17 March 2002).

Spinello, R. (2000). CyberEthics: Morality and Law in Cyberspace. Sudbury, MA: Jones and Bartlett.

Steinberg, S. (2002). Response to the Research Methodology and Social Practice in Online and Offline Spaces: The Challenge of Digitization Symposium. Paper presented at the Annual Meeting of the American Educational Research Association. New Orleans, 3 April.

Thomas, J. (1996). Introduction: A debate about the ethics of fair practices for collecting social science data in cyberspace. The Information Society. 12(2): 107-117.

Turkle, S. (1997). Life on the Screen: Identity in the Age of the Internet. New York: Touchstone Books.

Victory, N. and Cooper, K. (2002). A Nation Online: How Americans Are Expanding Their Use of the Internet. Washington, DC: National Telecommunications and Information Administration, Economics and Statistics Administration, and U.S. Census Bureau.

Warnock, G. (1970). The Object of Morality. London: Methuen.

Warschauer, M. (2002, in press). Technology and Social Inclusion: Rethinking the Digital Divide. Cambridge, MA: MIT Press.
