2. TOWARDS SOME DEFINITIONS
2.1 Why do we need definitions?
All these comments from the literature suggest that the researchers are concerned about how participants feel, and want to respond by asking what constitutes ethical behaviour for researchers or interviewers. But there is much inconsistency in the claims made about what kind of emotional or psychological harm can be caused, about what the researcher and participant can achieve in an interview situation, about what moral obligations researchers have, and about what ends these 'shoulds' are supposed to serve.
In ethical discussions, I believe it is useful to be as specific as possible about what aspect of the situation is causing concern. What is the precise nature of the abuse or harm? What is being taken from whom? What prevents the 'victim' from counteracting the person who is claimed to be taking advantage? These are not questions I propose to answer in this paper, but I do think they need asking. And before those questions can be answered, we need to be much more precise about what we mean by some of the terms that have been used in the literature on research ethics.
In fact, this exercise may take us some way to resolving some of the problems. As MacIntyre (1966:2) wrote: "To understand a concept, to grasp the meaning of the words which express it [...] is always at least to learn what the rules are which govern the use of such words and so to grasp the role of the concept in language and social life." If we cannot reach greater clarity and wider agreement on what we are going to mean by these key words, then I see little point in their being used at all as a way of deciding the right courses of action in research relationships.
2.2 Two meanings of 'responsible'
Underlying much of what has been quoted is the issue of responsibility. In an enquiry into the ethics of research, it might seem reasonable to ask: "To what extent is the researcher responsible for what happens to the participant?" But I think there may be some clumsiness in how this word gets used.
Paul and Paul (1983:211) suggest that, traditionally, "responsible people are those who behave 'properly', and irresponsible people those who behave 'improperly'" according to the values, beliefs and expectations of a given culture. That looks simple enough, but it only encompasses the moral dimension of the word 'responsible'. This first meaning is used in statements like 'I should do (or not do) x' ('should' statements).
It is not always clear how the word 'should' is to be taken. Certain 'should' statements could be described as 'technical': 'You should wipe your feet before entering the house (so that the floor does not get dirty)'. However, 'should' is also used in moral injunctions, statements about what is incontestably right action around other people for achieving the 'good life'. In fact, the technical example I gave, like all technical 'shoulds', ultimately only has force if an appeal is implicitly made to the 'good life': 'If the floor gets dirty, I have to clean it, which makes me unhappy, and that is bad. So, you should wipe your feet.'
The second meaning of responsible comes in statements of the form 'I committed the act'; and a subset of this category would be 'I caused x to happen' ('is' statements).
The confusion lies in how people get from 'is' to 'should'. Consider this scenario: 'You are responsible for this happening. Therefore you are responsible for putting it right.' These are two quite different statements. In translation, they would read: 'You caused this situation, therefore you should put it right.' However, 'Things are thus' can never be used as a justification for 'Things ought to be thus'. As Lewis (1943:23) put it: "'This will cost you your life' cannot lead directly to 'do not do this': it can lead to it only through a felt desire or an acknowledged duty of self-preservation."
Since values are needed to link 'is' to 'should' statements, problems arise if researcher and participant disagree on a particular value (e.g. whether it is 'wrong' to be distressed in an interview): they will then disagree on what ought to happen. The same kind of disagreement can happen between social researchers trying to establish a set of ethical guidelines for research practice.
Once this ambiguity around 'responsibility' has been identified, we can examine both the 'is' and the 'should' statements in more detail. Cohen and Manion's (1994:371) "principle that subjects ought not to leave the research situation with greater anxiety or lower levels of self-esteem than they came with" implies that the researcher should ensure (at least in part) the emotional wellbeing of the participant. But is the participant not capable of moving out of his own distress without the researcher's help? And, if help is deemed necessary, somebody else's help may serve just as well. If the researcher's presence is ultimately redundant for the recovery of emotional or psychological equilibrium, then why 'should' she help? And if it is possible for the participant to regain composure and happiness without the researcher's intervention, then it calls into question the extent to which the researcher was responsible for (i.e. caused) what happened emotionally to the participant in the first place.
2.3 How do we decide what is wrong?
Sieber and Stanley (1988) talk about 'harm', though it is not clear what this means. While it is not difficult to find those who believe, for example, that being upset is undesirable, it is not so easy to see why something undesirable should be classed, by definition, as bad or wrong.
It seems strange to attribute some kind of moral dimension to emotions. Those aspects of ourselves which, for some reason, we try to hide dwell in what, in psychotherapeutic terms, is variously called the Shadow (Zweig and Abrams 1990) or the unconscious; and it is those areas which we are most likely to condemn as 'wrong'. Certainly, distress is often seen as undesirable by many. But 'undesirable' is not synonymous with 'wrong', at least not to all people at all times. The tyrant sees insurrection as 'undesirable', but that does not make it 'wrong' for the rebels to take him on. I think it is essentially suspect to use our own personal predilections or level of comfort as a definitive way of deciding what we consider morally right or wrong.
So if feelings cannot be used as reliable decision tools in ethics, what else can we call upon? Those precepts we claim to be 'ethical' and absolute, beyond cultural differences or 'boo-hurrah' statements, look no different from ideological statements. And, as Hughes (1995:26) points out: "Ethics is precisely to do with judgments of absolute value, which by their nature cannot be verified empirically as true or false." So if a researcher wishes me to follow a particular precept, she has no logical means by which she can convince me that a particular claim has ultimate legitimacy rather than being simply something that she would like.
In his exploration of the idea of ultimate justification, Wittgenstein (1980:16) perhaps provides some help: "Nothing we do can be defended absolutely and finally. But only by reference to something else that is not questioned. i.e. no reason can be given why you should act (or should have acted) like this, except that by doing so you bring about such and such a situation, which again has to be an aim you accept." So which are the aims we accept? No tears from the participant, no distress? No emotional or behavioural change in the participant at all? Which changes or effects are acceptable, and which are not?
At the same time, Wittgenstein (1921:6.422) seems to make the problem even more intractable: "When an ethical law of the form, 'Thou shalt ...', is laid down, one's first thought is, 'And what if I do not do it?' It is clear, however, that ethics has nothing to do with punishment and reward in the usual sense of the terms. So our question about the consequences of an action must be unimportant." Perhaps then, in considering what is 'right action' for a researcher, we need not look at the consequences at all? What is ethics if it does not include consequences of actions?
Perhaps the way I am presenting these ideas does not look very helpful, in that I am highlighting ambiguity where clarity would be far more desirable. However, as Pirsig (1991:163) explains: "Morality is not a simple set of rules. It's a very complex struggle of conflicting patterns of values." Ethical life has a dynamic quality; while guidelines are useful, they can never cover every eventuality, and prescriptive rules of conduct can dull our ability to respond appropriately to the unique aspects of each new situation.
2.4 Two criteria: 'care' and 'sensitivity'
In that last sentence, I almost wrote 'sensitively' rather than 'appropriately'. It is a word which sometimes appears in ethical arguments; in an earlier quotation Dawson (1996) declared that "researchers ... must treat [participants] with ... sensitivity". This criterion might be taken as a sign of a 'caring' person. However, Allmark (1995:23) cites the example of a 'good' torturer who must be "sensitive to people's needs in order to deprive them of them."
In his analysis of 'care', Allmark (1995:19) points out the inadequacy of the term: "Caring is not good in itself, but only when it is for the right things and expressed in the right way. 'Caring' ethics assumes wrongly that caring is good, thus it can tell us neither what constitutes those right things, nor what constitutes the right way." For example, I can 'care' about keeping my house spotlessly clean, but if I neglect the needs of those who live with me, I am not a 'caring' person. Clearly much care must be taken if we are to be successful in discussions of ethics!
2.5 Who constructs the situation, and is it natural?
Earlier on, I quoted Frohmader (1996a, 1996b), who spoke of the research relationship as unnatural in that it was constructed by the researcher. If the intended meaning of 'natural' is 'normal' or 'usual' or 'common', then one would have to explain a) why familiar situations were morally more acceptable than the unfamiliar, b) what a 'normal' encounter looked like, and c) in what way the parties were more 'equal' in a 'natural' encounter than in a research context.
As for the researcher constructing a situation unilaterally, it is difficult to imagine how the participant has not played a part in constructing, consenting to and sustaining the situation just as much as the researcher. This second point is especially important, for if any discussion of causal and moral responsibility in a research relationship is to be complete, the role of the participant must be taken into account alongside the researcher's.
A participant will react differently to a researcher's act (or even not at all) depending on what meaning he attributes to it. This is regardless of what the researcher meant by the act, or whether she meant anything by it at all. For example, the researcher may ask a particular question (we don't know why); for the participant to experience distress, he must, at the very least, either a) find his answer distressing in itself or b) for some reason be concerned about what the researcher will think of the answer - and perhaps, by implication, think of him. Arguably, we are only affected by people's verbal judgments (good/bad, right/wrong etc.) about us if we are predisposed to judge ourselves. Otherwise, we disregard the judgments others make about us.
A participant may feel that he cannot refuse to answer, that he may lose face or feel awkward about setting a boundary. Certainly, these could be outcomes. The choice to do something different is there, even if the participant does not believe this to be so. If the participant is not aware of this, that is unfortunate. But, as I have already pointed out, it is not a direct logical step to claim that in such circumstances it is the researcher's obligation to point out any of the other choices. Any researcher or participant who makes that claim may well be adopting a codependent stance.
Kopp (1974:60-1) forcefully makes this point from his perspective as a psychotherapist:
We have returned again to the 'who is responsible?' question. Perhaps now is a good time to pose a question. If doctors, lawyers and other similar professionals are not held 'responsible' for non-compliance in patients who refuse treatment, or clients who do not follow advice, does it make sense for researchers to be held 'responsible' for participants' compliance and/or any subsequent reactions/emotions?
2.6 Autonomy, freedom and humanity
No discussion of ethics and responsibility can be conducted without looking at what we mean by freedom and autonomy. My own understanding of freedom is that it is not an object. It can only be experienced as it is exercised, "in moments of creativity or, again, in moments of difficult decision" (Macquarrie 1982:13). While freedom itself may not be observable, the products of creativity are. And creativity does not make sense without the presupposition of freedom (the essence of voluntarism). Indeed, autonomy (or freedom) is a presupposition of intentional action, of the ability to make choices and decisions, of rational enquiry and judgment, and of moral agency. These qualities are common to all humans, although some may have more information than others on which to base their decisions. Rousseau (1762:55) understood that the exercise of freedom is an integral part of what it means to be human: "To renounce freedom is to renounce one's humanity..."
On the one hand, if autonomy is inherent in our understanding of what it means to be human, this must include all humans, regardless of age, gender, mental ability or emotional instability. But everyday experience makes us (myself included) balk at that. So on the other hand, I wonder if children are in a different category: their psychic boundaries are, perhaps, not fully formed, so they are somehow inherently more vulnerable and in a position of what is sometimes called 'reduced autonomy'.
But other ideas make me question how far to accept this second possibility. For example, when do children stop being children? Is it the right construct to place vulnerability and autonomy at the two ends of a continuum? Where do we place the mentally or emotionally unstable on this continuum? If it is vulnerability which undermines our autonomy, whom can we point to and justifiably say: that person is autonomous? It seems that the whole notion of autonomy begins to collapse if we put children or any other group traditionally marked out as 'vulnerable' into a category separate from other humans.
But I am reluctant to discard autonomy as a concept applicable to human beings. Without autonomy as part of our understanding of what it is to be human, you do not have a human being - you have an automaton in a deterministic world. And I do not see how ethics could have any relevance in a deterministic society. Can one point at a single abused child, a prisoner, an oppressed group, and say, 'they are not human' or 'they do not hold ultimate sovereignty over their own thoughts and feelings'? They may feel as though they are not being treated as humans, but that does not stop them being human, however physically constrained or psychologically shredded they might be.
2.7 Respect for persons
I agree with Macquarrie (1982:19) that "All of us find ourselves thrown into an existence which we did not choose and the circumstances of which we did not choose, and it is from that point on that we begin to exercise whatever freedom is open to us." But to regard a participant as having reduced autonomy by virtue of his environment (including society, of which the researcher is a member), far from being compassionate, may demean him. And if a researcher styles herself as the protector of a participant, giving herself licence (in her own mind) to act for him without his request or consent, then that may be simultaneously diminishing him whilst exalting herself.
On what basis can a researcher know what a participant's needs might be and how they can be met? It would seem presumptuous if a researcher believed she was the right person to meet those needs, or even able to do so. As we have seen, to be caring, sensitive or protective may not be respectful at all, but be indicative of controlling behaviour on the part of the researcher to serve her own values or protect herself. 'Respect for persons' might be best characterised by recognition of and trust in the equal ability of researchers and participants to protect themselves.
2.8 Participants
Many of the quotations I have used have referred to the researched variously as 'informants', 'respondents' or 'subjects'. All of these terms seem to place the researched in a very passive role (as does the term 'researched'), a role which seems to neglect the agency of those people the researcher spends time learning about. Throughout this paper, I have adopted the term 'participant'. For me, this helps maintain a dual perspective, reminding me that I am talking about a relationship in which both parties are relating and active in the construction of the situation. It seems both sensible and respectful towards participants to keep in mind that there are two causal/moral agents in the relationship.