Trust, a Cross-Disciplinary View

A substantial part of this project is the design of a trust framework: a software system for calculating a metric for trust between entities in a social network. In this post we will begin to explore our understanding of trust by looking at it from various academic viewpoints.

Social networks have been studied since the 1950s, but the idea has expanded dramatically with the arrival of the Web. In the early days, web-based social networks were a means of connecting family and friends. The rapid growth of the Web in terms of both population and use cases has led to a wide variety of interactions between strangers for a range of different purposes. This increased complexity of interaction types and actor roles creates the possibility of a variety of consequences for individuals using the web, and so trust has become an important factor in the success of social web applications [1].

Broadly speaking, trust is a measure of the confidence that an entity can be relied on to behave in an expected manner. Describing trust, however, is a great deal more complex, and it has been studied from the viewpoint of numerous academic disciplines, including philosophy, economics, sociology, psychology and computer science [1], [2].

According to Sherchan et al. [1] there are three primary disciplines concerned with the definition of trust: computer science, psychology and sociology. Other disciplines draw on these definitions to explore various facets of trust relationships.

In psychology, trust has been the subject of a large and long-established body of research. Psychologists think of trust as a psychological state in which a person is willing to risk being vulnerable to another actor, and identify three facets of the concept:

  • Cognitive trust is the result of rational decision making based on experiences of, and information about, the trustee.
  • Emotive, or affective, trust is based on subjective emotional feelings about the trustee.
  • Behavioural trust describes the act of an individual in making themselves vulnerable to another; clearly this requires the antecedence of cognitive and/or emotive trust.

Two types of trust are identified: relational and generalised. Relational trust refers to trust between two individuals. It has a strong cognitive element based on specific interactions between actors. Generalised trust is placed by an individual in groups of people, institutions and wider aspects of society [3]. Generalised trust is often seen as a coping mechanism for unfamiliar circumstances and is built by extrapolating from known scenarios; thus people all have varying degrees of generalised trust at different levels of granularity, from a general feeling about humanity as a whole down to ideas about specific groups (see Bandura's social learning theory). Levels of generalised trust are less accurate predictors of human behaviour, and do not serve as indicators of levels of relational trust; rather, they act as a starting point from which relational trust is built up over time [1].

In computer science, trust is classified as user trust or system trust. User trust draws for its definition on ideas from sociology and psychology. It refers to trust between human actors in computer-mediated communication networks [1]. The definition is more or less the same as the general one we outlined earlier and is used to model trust between individual users in the development of systems such as eBay and Amazon. As such, this type of trust is relational in nature, i.e. built up between two individuals over the time they spend interacting. There are two types: direct trust, based on personal experiences, and recommendation trust, based on the experiences of other people in the network. Recommendation trust is propagative in nature, and in modelling it computer scientists draw on graph theory, using characteristics of the social network graph such as centrality metrics [4].
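To illustrate the propagative character of recommendation trust, here is a minimal sketch in the spirit of EigenTrust [4]: each actor's direct trust ratings are normalised, and a global trust score emerges by repeatedly aggregating recommendations across the graph. The data and function below are hypothetical illustrations, not the algorithm as published.

```python
def propagate_trust(local_trust, iterations=50):
    """Sketch of propagative recommendation trust, loosely modelled on EigenTrust.

    local_trust[i][j] is actor i's direct trust in actor j (hypothetical data).
    """
    n = len(local_trust)
    # Normalise each actor's outgoing ratings so their recommendations sum to 1.
    norm = []
    for row in local_trust:
        total = sum(row)
        norm.append([v / total if total else 1.0 / n for v in row])
    # Start from a uniform prior and repeatedly aggregate recommendations:
    # an actor's global score is the trust-weighted sum of others' ratings of them.
    global_trust = [1.0 / n] * n
    for _ in range(iterations):
        global_trust = [
            sum(global_trust[i] * norm[i][j] for i in range(n))
            for j in range(n)
        ]
    return global_trust

# Three actors: A and B both place most of their direct trust in C,
# so C ends up with the highest propagated score.
scores = propagate_trust([
    [0.0, 0.2, 0.8],
    [0.1, 0.0, 0.9],
    [0.5, 0.5, 0.0],
])
```

The iteration is a power method over the normalised trust matrix, which is why centrality metrics from graph theory map so naturally onto recommendation trust.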

In the human-computer interaction subdiscipline, computer science draws on many other academic disciplines to understand trust. A distinctive aspect of the computer science approach is its emphasis on modelling trust and calculating metrics that quantify and communicate it as a value [5].

Ideas about system trust originated in the security domain. System trust is defined as the expectation that a system will behave in a predictable manner, such that it can be relied upon to fulfil a specific purpose. In network security there are some interesting systems for modelling trust using recommendation and external verification, and although many consider this outside the domain of social network trust, we will explore any potential cross-over in a future post.

In sociology, trust is a kind of social capital: a key concept in the field, but one few sociologists agree on a definition of, both for ideological reasons and because such definitions are context dependent. The Social Capital Research website gives an excellent overview of the issues [6], but in the context of online social networking, broadly speaking, social capital consists of the properties and features of a social network which allow its actors to mobilise collective resources for mutual benefit, adding value to the network. Because definitions of social capital involve interactions between actors, trust is seen as a key type of social capital, without which a network will struggle to achieve value.

Most sociologists see trust as a psychological state [1] that is built up over time through the repeated occurrence of situations in which the trustee has the opportunity to betray the trustor at no cost to themselves and declines to do so [3], [7]. An important distinction is made between trust and assurance, the latter being cooperation in the presence of an external agency that can administer negative consequences should the vulnerable party be betrayed [2]. Studies have shown that this repeated risk on the part of the trustor, and the lack of any consequences for the trustee, are necessary components of trust building, and that the addition of assurances and mediation by third parties results in cooperation but no build-up of trust [7].
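As a toy illustration of this incremental build-up, relational trust can be modelled as an estimate of the probability of cooperation that grows with each uncostly opportunity to betray that the trustee declines. The beta-reputation-style update below is our own sketch of that idea, not a model taken from the cited studies.

```python
def update_trust(cooperations, betrayals):
    """Estimated probability the trustee will cooperate, given their history.

    Uses the expected value of a Beta(cooperations + 1, betrayals + 1)
    distribution, i.e. a uniform prior updated by observed interactions.
    """
    return (cooperations + 1) / (cooperations + betrayals + 2)

# Starting from a neutral 0.5, trust rises with each declined betrayal:
# after ten unguarded interactions the estimate approaches 11/12.
history = [update_trust(n, 0) for n in range(11)]
```

Note that the model captures the sociological point above: the score only moves when the trustee has a genuine, unmediated chance to defect, so assurance-backed cooperation would contribute no observations at all.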

Trust in sociology is divided into two main categories. Individual trust between two actors is similar to the view from psychology in that it involves the vulnerability of the trustor. Societal trust is seen as a property of groups: a collective psychological state in which group members have expectations about the behaviour of other group members. Viewing trust as a group state rather than an individual one is seen as a uniquely sociological way of studying it [8]. It is important to note that a dyad is considered a group, so a trust relationship between two individuals can be seen either as a property of them both or as a state of the trustor.

Trust and Interdisciplinarity

The definitions of and approaches to trust across the academic disciplines each offer unique insights into the problem of trust in our domain, i.e. trust in online social networks. Computer science offers insights into the modelling and measuring of trust, sociology explores the notion of trust as a property of groups, and psychology describes trust as a psychological state. Other disciplines capture various nuances of trust which add to this picture.

Our problem is to decide on a means of measuring and expressing the level of risk a person takes (their trustability) when interacting with another anonymous individual or group of individuals in an online setting. In order to do this we need to understand the ways trust is built in a real-world setting, so that we can model this within the social network. To capture the various nuances of trust that the academic disciplines work with, an interdisciplinary approach is called for. Solving the problem of trust in social networking is clearly a job that needs web scientists!

 

References

[1]          W. Sherchan, S. Nepal, and C. Paris, ‘A survey of trust in social networks’, ACM Computing Surveys, vol. 45, no. 4, pp. 1–33, Aug. 2013.

[2]          D. M. Rousseau, S. B. Sitkin, R. S. Burt, and C. Camerer, ‘Not so different after all: A cross-discipline view of trust’, Academy of management review, vol. 23, no. 3, pp. 393–404, 1998.

[3]          P. Beatty, I. Reay, S. Dick, and J. Miller, ‘Consumer trust in e-commerce web sites: A meta-study’, ACM Computing Surveys, vol. 43, no. 3, pp. 1–46, Apr. 2011.

[4]          S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina, ‘The eigentrust algorithm for reputation management in p2p networks’, in Proceedings of the 12th international conference on World Wide Web, 2003, pp. 640–651.

[5]          D. Artz and Y. Gil, ‘A survey of trust in computer science and the semantic web’, Web Semantics: Science, Services and Agents on the World Wide Web, vol. 5, no. 2, pp. 58–71, 2007.

[6]          T. Claridge, ‘Definitions of Social Capital’, Social Capital Research, 2004.

[7]          L. D. Molm, N. Takahashi, and G. Peterson, ‘Risk and trust in social exchange: An experimental test of a classical proposition’, American Journal of Sociology, vol. 105, no. 5, pp. 1396–1427, 2000.

[8]          J. D. Lewis and A. Weigert, ‘Trust as a Social Reality’, Social Forces, vol. 63, no. 4, p. 967, Jun. 1985.

 

One thought on “Trust, a Cross-Disciplinary View”

  1. Robert Thorburn

    Recommendation trust being propagative and calculated within a predictable system (as far as any system with human agents can be) makes it an excellent candidate for our needs. Add to this the fact that beta testing introduces a time element, so trust builds up, and we have the ingredients for a functioning and demonstrably superior system come launch. Web Scientists for the win!
