CS854 Commentary -- Week 1: Introduction - What and Why?


  • All TA comments written in red.

    In the article "How Computer Systems Embody Values," Nissenbaum argues that technology can have values and cites the technological ability to "capture trivial bits of data to aggregate, mine and analyze them" as a threat to privacy that society must deal with. While the latter statement may be true, it is not a description of how technology (1) embodies values so much as a description of how society can perceive values to be present in the application of technology.

    While some applications lend themselves more easily (2) than others to human-implemented activities that do carry ethical and moral values, those values are contained in the applications themselves rather than being inherent to the technology.

    Perhaps the most extreme example would be a hypothetical working computer-based artificial intelligence meant to mimic the thoughts and emotions of a real human being. Would the individual perceptrons, neural networks or whatever other technology used to implement this AI embody values? That would be like arguing that electrons can embody values because they can be manipulated to provide energy to many of our modern-day devices. (A notion I find so preposterous, with an answer so self-evident, that I will not explore it any further.)

    Then would the intentions of whoever created this AI give it values? Following this line of reasoning further, it could be argued that the individual neurons and other cells in the brain have no values, yet the human mind does (3a). Since the mind is merely the application constructed out of these cells and neurons, its values, if any, can only really be whatever its creator had in mind. Given the strong evidence presented for Darwin's theory of natural selection, the implication is that there are no intended values, or perhaps only random ones (3b). A distressing thought indeed!

    I return to my original thesis. Despite the purpose of this AI (and the author's implication that it can think and act on its own), any values society ascribes to this technology would not actually exist in it, but would merely be perceived to be there through how society interacts with it. I do concede that people can feel some things have inherently internal values, and that this commentary merely shifts the location of where those values actually reside (in the person's mind, not the object).

    But I digress. In summary, I believe that non-sentient things (and especially inanimate (4) objects) are value-neutral, and that these values are only perceived through how people interact with them, much like in Max Weber's (5) symbolic interaction theory.


    You have some interesting arguments and original ideas. Not all your arguments convinced me - see the comments below.

    (1) I think you are right that a piece of technology cannot have values in the same sense as a person does; for example, my computer cannot be said to "value" privacy. However, what Nissenbaum means when she says that a piece of technology embodies values is that the way that the piece of software (or hardware) is designed influences the way it can be used. The use of that technology might involve respecting or compromising certain values. I agree with you that values are something that we perceive, but which values we perceive will be influenced by the technology itself. To use the data mining example you quoted, imagine two different software systems for ordering groceries online: one system allows a user to log in, select groceries and then pay by giving a credit card number, and after the user's order is filled all the data is destroyed; the other retains the user's data after the order is filled. The second system has the potential to allow the user's privacy to be violated, whereas the first system does not.

    (2) Exactly -- if one software system lends itself more easily to a certain application in which a value must be considered than another software system, then Nissenbaum would say that the difference reflects a difference in values embodied by the software systems.

    (3) In what sense does the human mind have values? On the one hand, it sounds to me as if you believe that the human mind has some value in and of itself, but then later you state that its values (if any) are determined by the intentions of its creator (if any). If you are stating that the mind has inherent value, but the neurons and electrons of which it is composed do not, then perhaps values can only be present in complex systems. Computers and software are also complex systems, so some people would argue that values can emerge in complex systems, either artificial or biological (this is a big question of debate in philosophy of mind). If you are arguing that the only values that the human mind can be said to embody arise as a result of its creation, I agree with you that if the theory of evolution is correct it is difficult to see what those values might be. HOWEVER, what we can be absolutely sure of is that, unlike with human beings, you can readily identify the creator(s) of computer technology. According to the argument that you made about the human mind, we could conclude from this that computer technology definitely embodies values, since its creator certainly had something in mind when designing and making it.

    (4) As I said in (1), I think you are using the word "value" in two different senses. Non-sentient, or inanimate, objects definitely do not have the ability to "value." When we say that technology embodies values, we mean that there is something about the way that technology is constructed which affects the way it can be used, and through that use, certain values might be promoted or compromised.

    (5) I am not familiar with Max Weber's theory (aside from the name), but a sociology expert I know tells me that this could be a possible application.

    8/10
