Abstract
Automation has crept into our daily lives: machines may tell us when our pizzas are
ready, answer incoming phone calls when we are away, govern how we drive our cars, and
indicate where we have to go to arrive at our destination. According to some, this is only
the beginning.
People, however, rarely know the ins and outs of the systems they encounter in daily
life. Few, for example, know how their car navigation system works, beyond the notion that it
has "something to do with satellites". Not being able to fully understand such a system and
assess its capabilities may constitute a major obstacle in the use of system advice. Trust is
assumed to be a mechanism that enables people to deal with such situations of uncertainty
or risk, and as such, is crucial in a user's decision to rely on system advice. This research
project aimed to gain insight into the factors that underlie the formation of system trust, and
to understand its effect on a user's decision to rely on a system to perform tasks or to follow
its advice.
A series of experiments was conducted in which participants had to work with a route
planner. This route planner generated routing advice, and the trust of the participant in
the system was measured. In a number of experiments, participants could decide whether
they wanted to plan the routes themselves, i.e., plan manually, or whether they wanted to
delegate the planning task to the system, i.e., engage in automatic mode. In this way, the
behavioural consequences of trust could be determined.
The role of direct information has attracted a lot of attention from researchers in the
field of system trust. Usually, trust is manipulated by the occurrence of failures in system
output. The first experiment reported here, in Chapter 2, followed this line of research by
manipulating failure rates in manual and automatic mode. However, contrary to many
experiments, the number of previous interactions did not differ between modes. The results
showed system trust and self-confidence to be affected by failures in automatic and manual
mode, respectively. In turn, system trust in particular influenced whether people selected
manual or automatic mode when they could choose freely, although they also displayed a
preference for manual mode.
Users can also obtain trust-relevant information from sources other than personal
experience, i.e. indirect information, such as the opinions of others or analyses in
consumer magazines. The first experiment reported in Chapter 3 showed that the overall
valence of an evaluation exerted a considerable influence on trust. In other words, a
positive evaluation caused an increase in trust, whereas a negative evaluation led to a
decrease in trust. A follow-up experiment subsequently showed the provided consensus
information to affect both trust and the use of the automatic mode. A favourable opinion
concerning the system that was endorsed by a small group of people was shown to exert a
negative influence on both trust and the use of the automatic route planning mode,
in contrast to the same opinion endorsed by a large group. These experiments show that
trust-relevant information may be processed in different ways. Activation and application of the
heuristic "consensus opinions are correct" upon perceiving consensus information probably
caused participants to believe the opinion endorsed by a majority, in contrast to an opinion
endorsed by a minority. The evaluation supplied in the first experiment was processed
more elaborately, or systematically, causing trust ratings to correspond with the overall
valence of the message.
In Chapter 4, experiments are described that test whether system behaviour may also
convey information when clear outcome feedback, i.e. failure messages concerning the
quality of the automatically generated routes, is not available. Possibly, inferences are
made based on the mere appearance of automatically generated routes that are displayed
on the screen, i.e. process feedback. In addition, as people may often have multiple types
of information available to help them form trust, this process feedback was pitted against
consensus information. The first experiment showed that the absence of process feedback
led to a somewhat reduced effectiveness of the consensus information, whereas no such
reduction was found when process feedback was available. Arguably, the process feedback, which was
rather random in appearance, had necessitated the continued use of the consensus
information to support the interpretation of the feedback in terms of quality. The second
experiment showed that random process feedback caused a sustained effect of consensus,
whereas consistent process feedback cancelled out the consensus effect. These findings
were supported by the third experiment. Manipulations of the face validity of process
feedback, furthermore, proved to have an additional effect: routes with high face validity
that displayed consistency or randomness led to higher trust than those that were consistent
or random but had low face validity. These results suggest that consistency in process
feedback allows inferences to be drawn about how the system operates, creating a
sense of understanding that increases trust. Random process feedback, on the other
hand, hardly allows for such inferences. In other words, the information obtained from
consistent process feedback probably competed with the less-informative consensus
information, causing the latter to be overruled. Conversely, randomised routes do not yield
trust-relevant information, which may explain the sustained effect in the case of random
process feedback.
In Chapter 5, an experiment is reported that aimed to examine whether the process of
drawing inferences depended on the participants' motivation. In the highly motivated
group, the influence of the consensus information proved to be weaker than in the low-motivation
group. Additionally, highly motivated participants reported higher trust levels
in the random process feedback conditions than participants with low motivation. This was
not observed in the consistent process feedback condition, however. Apparently, the
inference of system rules from consistent process feedback was so easy that there was
hardly any information to be gained for highly motivated individuals.
Taken together, these results suggest that people may use any information available to
form a trust judgement. This information may be straightforward, such as failure messages,
or a list of positive or negative arguments in an evaluation. However, users may also
engage in inductive inference based on observed system behaviour, without any verifiable
indicators of output quality available.
The results of these experiments have consequences for the interpretation of the concept
of system trust with regard to distinctions between trust and confidence. First, all the
experiments reported here concerned situations of considerable uncertainty; under
distinctions based on the assumption that confidence, unlike trust, implies certainty,
the use of the label trust is therefore justified. Another distinction concerns differences in the
information that lies at the bases of trust and confidence. According to this distinction,
trust concerns agent-agent interactions, and is based on social relations, group
membership, intentions and shared values. Confidence, on the other hand, deals with
agent-object relations, and is based on experience and, thus, on perceived competence.
This distinction would imply that, unlike interpersonal trust, system trust, which
concerns interactions with an object, is based on perceived competence, inferred from
system behaviour. The results presented here, however, illustrate that an assessment of a
system's competence can be based on multiple types of information. In addition to past system
performance, competence turned out to be influenced by other information as well. Indirect
information, in the form of evaluations and consensus information, was also shown to affect
trust ratings, through both systematic and heuristic processing. An assessment of competence,
therefore, does not require behavioural input.
Original language | English
---|---
Qualification | Doctor of Philosophy
Award date | 20 Jan 2005
Place of Publication | Eindhoven
Print ISBNs | 90-386-2157-4
Publication status | Published - 2005