Synthetic emotions could make computers nicer
When today's users respond emotionally to a computer, they typically call it unprintable
names, hold down all the keys in exasperation or even contemplate throwing it out a
window. But such unpleasantness could be a thing of the past if projects at Stanford
University and at the Massachusetts Institute of Technology Media Laboratory bear
fruit. Researchers are studying how to make people feel happy about the relationship
between man and machine—and how to make computers more soothing when they
detect frustration. The approach has started to attract serious attention from computer
and software designers—as well as criticism that it is misconceived and ethically
questionable.
The new interest in how people feel about computers, as opposed to simply how they
use them, has been driven in large part by Byron Reeves and Clifford I. Nass of
Stanford, who have long studied how people respond to what Nass is happy to call a
computer's personality. Reeves and Nass have shown that even computer-literate
people respond emotionally to machine-generated messages they see on a screen, as
well as to apparently irrelevant details, such as the quality of a synthesized voice. Their
responses are much like those that would be elicited by a real person.
An unhelpful error message, for example, elicits the same signs of irritation as an
impolite comment from an unlikable person. Such involuntary and largely unconscious
responses have potentially important consequences. Users apply gender stereotypes
to machines, for example, rating a "macho" male voice as more authoritative
than a female one. Users also report enjoying interactions with a screen character of their
own ethnicity more than with one portrayed differently. Because so many people today
spend more time interacting with a computer than with other people, hardware and
software designers have a keen interest in such issues—as the imposing list of
corporate sponsors supporting Reeves and Nass's work testifies.
At M.I.T., Rosalind W. Picard and her students are trying to take the next step—giving
computers the power to sense their users' emotional state. Picard is convinced that
computers will need the ability to recognize and express emotions in order to be
"genuinely intelligent." Psychologists, she points out, have established that emotions
greatly affect how people make decisions in the real world. So a computer that
recognized and responded to emotions might be a better collaborator than today's
insensitive, pigheaded machines.
Detecting emotions is difficult for a machine, especially when someone is trying to
conceal them. But Picard says she has at least one system "that definitely looks useful."
The apparatus detects frowning in volunteers who are asked to perform a simple
computer-based task and are then frustrated by a simulated glitch. The setup monitors
the frown muscles by means of a sensor attached to special eyeglasses. Other studies
she has conducted with Raul Fernandez have achieved "better than random" detection of
frustration responses in 21 out of 24 subjects by monitoring skin conductance and blood
flow in a fingertip. Picard's work, too, has attracted industry interest.
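
To make the idea concrete, here is a minimal sketch of how a program might flag a frustration-like change in the two signals mentioned above. The feature choices, the 10 percent thresholds and all the names are illustrative assumptions, not Picard's or Fernandez's actual method.

# Illustrative sketch only: a toy detector that flags a possible frustration
# response from two physiological signals. Thresholds, features and names are
# assumptions for illustration, not the researchers' actual method.
from statistics import mean

def frustration_score(skin_conductance, pulse_amplitude):
    """Compare the second half of each recording with the first half.

    skin_conductance: samples of skin conductance (arbitrary units); a rise
        often accompanies frustration or stress.
    pulse_amplitude: fingertip blood-volume-pulse amplitudes; a drop can
        indicate reduced peripheral blood flow under stress.
    Returns a score from 0 to 2; higher means a more frustration-like change.
    """
    def relative_change(signal):
        half = len(signal) // 2
        baseline, recent = mean(signal[:half]), mean(signal[half:])
        return (recent - baseline) / abs(baseline)

    score = 0.0
    if relative_change(skin_conductance) > 0.10:   # conductance rose > 10%
        score += 1.0
    if relative_change(pulse_amplitude) < -0.10:   # pulse amplitude fell > 10%
        score += 1.0
    return score

if __name__ == "__main__":
    # Simulated recording: conductance climbs and pulse amplitude drops after
    # a "glitch" halfway through the task.
    conductance = [5.0] * 20 + [6.2] * 20
    pulse = [1.0] * 20 + [0.8] * 20
    print("frustration score:", frustration_score(conductance, pulse))  # prints 2.0

In a real experiment the raw signals would be far noisier and the decision rule would be learned from data rather than hand-set, but the sketch shows the basic shape of the task: turn physiological traces into a judgment about the user's state.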
Jonathan T. Klein, also at M.I.T., is building on Picard's results to try to make friendlier
digital helpmates. He is testing strategies for calming down frustrated users: his
system may, for example, solicit a dialogue with the user or comment sympathetically,
without passing judgment, on the user's annoyance. (These strategies were inferred from observations of
skilled human listeners, according to Klein.) Nass suggests that computers might one
day detect when a user is feeling down—and try to adapt by livening things up.
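
A rough sketch of how such a calming response might be wired up appears below. The messages, the threshold and the function name are hypothetical, chosen only to illustrate the kind of sympathetic, non-judgmental acknowledgment described above; they are not Klein's system.

# Illustrative sketch only: a toy help agent that answers a detected
# frustration level with a sympathetic, non-judgmental message. All messages,
# names and thresholds are assumptions, not Klein's actual system.
def respond_to_user(frustration_score, threshold=1.0):
    """Return a response string chosen from the estimated frustration level."""
    if frustration_score >= threshold:
        # Acknowledge the feeling and invite dialogue, without blaming the user.
        return ("It looks as if that did not go the way you expected. "
                "That can be really annoying. Would you like to tell me "
                "what happened, or shall I suggest another approach?")
    # Low frustration: stay out of the way.
    return "Let me know if you need anything."

if __name__ == "__main__":
    print(respond_to_user(2.0))
    print(respond_to_user(0.0))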
But the notion that computers might respond emotionally—or what psychologists call
"affectively"—itself causes frustration in Ben Shneiderman, a computer-interface guru
at the University of Maryland. Shneiderman says people want computers to be
"predictable, controllable and comprehensible"—not adaptive, autonomous and
intelligent. Shneiderman likens an effective computer interface to a good tool, which
should do what it is instructed to do and nothing else. He cites the failed "Postal Buddy"
stamp-selling robot, the extinct talking automobile and Microsoft's defunct "Bob"
computer character as evidence of the futility of making machines like people. And
there are significant ethical questions about allowing people to be manipulated by
machines in ways they are not aware of, Shneiderman contends.
Picard, though, says her studies address only emotions that people do not try to hide.
And Nass, who acknowledges Shneiderman's ethical concerns, notes that Microsoft
Bob's digital progeny are alive and well—as the humanoid assistants, such as
"Einstein" and "Clip-It," that dispense advice in Office 97's built-in help system.
Machines are already becoming more polite, Nass states, and more friendliness is on
the way.
Tim Beardsley in Washington, D.C.