Current developments toward Ambient Intelligence and related technological visions of the future are founded on the continuous collection of information about individuals and their activities. This collection of information, together with its potentially persistent storage, dissemination, and use, raises privacy concerns. In the debate surrounding privacy and Ambient Intelligence, this thesis takes a user-centered perspective, examining people's attitudes, preferences, and behaviors regarding the disclosure of information, and uses these insights to guide the design of interfaces for managing one's privacy needs. The work presented in this thesis comprises three study domains.

First, privacy concerns and disclosure behavior were investigated in relation to a music recommender system. It was anticipated that people would weigh the costs and benefits of their disclosure decisions and act accordingly. It was also expected that personality traits would be considered more personal or private than music preferences, resulting in a lower willingness to disclose such information. However, although personality traits were indeed considered more personal than music preferences, disclosure levels were similar. This study also demonstrated the methodological difficulties of studying privacy: despite efforts to provide a genuine music recommendation service, participants felt somewhat protected by the research context. The study showed that it is not sufficient to consider privacy disclosure merely as a trade-off pertaining to the personal value of information; other factors regarding the context of disclosure play an important role, e.g., the recipient, the assumed use of the information, the level of anonymity, and the study context. The second domain investigated, in four related studies, legal principles or guidelines as a basis for informing users about privacy consequences. Two pilot studies showed that comprehension of privacy guidelines is poor.
Another study, on differences in interpretation between video- and text-based scenarios, likewise showed poor understanding of compliance with privacy guidelines. Text-based scenarios describing privacy guidelines were better understood than video-based scenarios; furthermore, it turned out that comprehension can be improved when video-based scenarios are presented before the text. The relative importance of each guideline was also evaluated: participants were offered pairs of privacy guidelines in a health-care context and asked to indicate which guideline they preferred. Participants particularly valued having access to their own personal data (Insight), followed by having information about the other parties that have access to their information (Openness).

The last domain concerned interface solutions for managing users' privacy preferences. In this study, participants compared three interfaces based on three different conceptual models of how personal information is managed. The first conceptual model, Self-Representation, takes a social-psychology perspective, focusing on how individuals manage the presentation of their 'self' through selective disclosure of information. The second conceptual model, Information-Use, enables users to specify which types of information may not be disclosed for specific purposes of use. The third conceptual model, Split-Dimension, enables users to specify approved levels of information disclosure along three separate dimensions: recipient, purpose of use, and type of information. Overall, the interface based on Self-Representation was judged best with regard to five attributes: trust, risk, usefulness, ease of use, and intention to use. The interface based on Split-Dimension seemed slightly easier to use than the interface based on Information-Use, but apart from that these two interfaces were perceived as similar.
There were large differences across participants in the perception of the three interfaces. Based on these differences, four clusters of participants were distinguished. Three clusters appreciated the conceptual model based on Self-Representation but differed in the extent to which they appreciated the other two conceptual models; a remaining cluster showed no clear preference for any of the interfaces. In addition, a model was evaluated in this study to determine the relative importance of trust, risk, usefulness, and ease of use for intention to use. Risk did not have a significant relation with intention to use, whereas usefulness had the highest impact on intention to use; trust and ease of use both had small influences. Apparently, even in the context of privacy interfaces, trust, usefulness, and ease of use are more important than perceived privacy risks.

Based on the work presented in this thesis, it can be concluded that any system meant to give users control over their personal information in a personalized context should give users insight into their own personal data. Furthermore, it should inform users about which persons and parties have access to their information. Users should be protected from disclosing information that is generally regarded as personal, since they may not realize this at the moment of first use of the system. Systems for setting personal preferences for the disclosure of information should allow people a great deal of control; however, this level of control should be easy to achieve, through default settings that are preferably protective of users' personal information.
Qualification: Doctor of Philosophy
Award date: 19 May 2009
Place of Publication: Eindhoven
Publication status: Published - 2009