Distributed Trust Network Proposal

From Sharewiki.org
This document serves as a proposal for a new [[Digital Trust Networks|Digital Trust Network]]. Standard work-in-progress disclaimers apply.
'''Update: This project is now under active development under the name 'rainbeard'.''' This page is now out of date. Go [http://raincode.org/projects/rainbeard/wiki/Overview here] for the up-to-date version.

Revision as of 21:40, 16 June 2011


Trust 1.0

The existing ecosystem of digital trust networks is dominated by what we will refer to as Trust 1.0 systems. These systems give each agent a single, publicly accessible profile that accumulates karmic input from other agents. The Couchsurfing network is an example of such a system. Each user's profile has a comment wall with positive and negative feedback from others. The nature of this feedback varies, and the only quantifiable metric is a drop-down menu stating whether interactions with this person have been positive, negative, or neutral.

Shortcomings of Trust 1.0

The Couchsurfing reference system has experienced wide adoption. Unfortunately, there are several fundamental problems with the Trust 1.0 approach that limit its usefulness.

  • Shyness - People are extremely disinclined to give negative feedback, because the feedback is directly visible to the person in question. No matter how it's phrased, a negative reference invariably comes off as a low blow and sours whatever was left of the relationship. Leaving negative feedback is also risky, because other parties tend to retaliate with (often fabricated) negative feedback of their own.
  • Objectivity - Trust 1.0 systems assume that there is a single, objective portrait of a person to be portrayed. There isn't: even my enemies tend to have friends who will write positive things about them. These failures range from intentional deception (a ring of burglars vouching for each other's trustworthiness) to honest differences of opinion (you and I disagree on whether Sarah's partying is positive or negative).

Trust 2.0

Trust 2.0 does away with the notion of objectivity and attempts to mimic human social memory. Suppose I move to a new town and want to know who I can trust. I can't go to the town hall and read a profile of each resident to inform my decision. So instead, I befriend a few people and use them as a resource to evaluate others. Can I safely lend my car to Nick? Do I want to go on a road trip with Rachel? These are questions I would pose to my confidants. I then use these data points to form my own opinion, taking my feelings on the source into account. For example, I might trust Steven with my life, but I might also feel that he's more easygoing than I, and take his recommendation to travel with Rachel with a grain of salt.

The key idea here is that my connections in the social graph shape the information that I receive. To obtain useful data, there must be people whose opinions I value who are also willing to share these opinions with me.

Phase One - Core

The core of the system is a simple social networking site. Each user has a spartan profile (no need for a lot of information). The two core workflows are adding connections and querying information.

Adding Connections

Alice can add a basic connection to Bob without Bob's knowledge or consent. This information comes in the form of tags. For example, Alice can find Bob on the network, and apply the tags 'trustworthy', 'extrovert', 'reliable', and 'altruistic'. She can also increase or decrease the weight of these tags depending on how strongly she feels.
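A basic connection like this can be sketched as a small data structure. The class and method names below are illustrative only; the proposal does not specify a schema.

```python
from collections import defaultdict

class Connection:
    """One-way connection: the tags one user applies to another.

    Tag weights default to 1 and can be raised or lowered to reflect
    how strongly the tagger feels.
    """
    def __init__(self):
        self.tags = defaultdict(int)  # tag -> accumulated weight

    def apply_tag(self, tag, weight=1):
        self.tags[tag] += weight

# Alice tags Bob without Bob's knowledge or consent.
alice_to_bob = Connection()
for tag in ("trustworthy", "extrovert", "reliable", "altruistic"):
    alice_to_bob.apply_tag(tag)
alice_to_bob.apply_tag("trustworthy", 2)  # she feels strongly about this one
```

Note that the connection is directional: Bob may hold an entirely different set of tags about Alice, or none at all.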

After making the basic connection, Alice has the choice of upgrading her connection to Bob so that she may receive information from him about others. She first clicks "Ask Bob to share his opinions", and also answers the question "I would give Bob's opinion of someone X% of the weight of my own opinion". Bob will be notified of the request, and be given the option to accept or ignore. If he accepts, Alice can query Bob's opinion on a third party at will.
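The upgrade workflow above could be modeled as a request object carrying the X% discount factor. All names here are hypothetical; only the accept-before-query rule and the weight come from the proposal.

```python
class TrustLink:
    """Alice's request to receive Bob's opinions of third parties.

    'weight' is the X from "I would give Bob's opinion of someone X%
    of the weight of my own opinion", stored as a fraction.
    """
    def __init__(self, requester, target, weight):
        assert 0.0 <= weight <= 1.0
        self.requester = requester
        self.target = target
        self.weight = weight      # e.g. 0.7 means 70%
        self.accepted = False

    def accept(self):
        self.accepted = True

    def can_query(self):
        # Alice may query Bob's opinions only after Bob accepts.
        return self.accepted

link = TrustLink("alice", "bob", 0.7)
assert not link.can_query()  # Bob hasn't responded yet
link.accept()                # Bob accepts; Alice may now query at will
```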

Querying Information

Suppose Alice is considering buying a car from Mallory. She's never met Mallory, but she's friends with Bob, who knows Kent, who knows Mallory. Kent's opinion is then delivered to Alice, discounted by the sequence of uncertainty factors it passes through along the way. For example, if Kent is 90% sure that Mallory is untrustworthy and Bob gives Kent's opinions 70% of the weight of his own, Bob will tell Alice that he's 0.9 × 0.7 = 0.63, or 63%, sure that Mallory is untrustworthy.
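The chained discounting above is just repeated multiplication. A minimal sketch, with an illustrative function name:

```python
def propagate(confidence, link_weights):
    """Discount a source's confidence through a chain of link weights.

    E.g. Kent is 90% sure of something; Bob weighs Kent's opinions at
    70% of his own, so Bob passes 0.9 * 0.7 = 0.63 along to Alice.
    """
    for w in link_weights:
        confidence *= w
    return confidence

bob_reports = propagate(0.9, [0.7])  # ≈ 0.63
```

Each additional hop multiplies in another factor below 1, so confidence decays naturally with social distance.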

As more connections are added to the social graph, the number of accessible data points increases exponentially. Alice can continue to ask for opinions individually, or she can inspect a summary view from all of her connections where each data point is modulated by the weight Alice gives to that connection.
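One way the summary view might combine data points is a weighted mean, where each connection's report is scaled by the weight Alice assigned to that connection. The proposal leaves the aggregation rule open; this is one plausible sketch.

```python
def summarize(opinions):
    """Weighted average of incoming opinions.

    opinions: list of (confidence, weight) pairs, one per connection
    that responded, where weight is how much Alice values that
    connection's opinion. Returns None if there is no data.
    """
    total_weight = sum(w for _, w in opinions)
    if total_weight == 0:
        return None
    return sum(c * w for c, w in opinions) / total_weight

# Bob (weight 0.7) reports 0.63 confidence; Carol (weight 0.5) reports 0.4.
summary = summarize([(0.63, 0.7), (0.4, 0.5)])
```

A weighted mean keeps a loud acquaintance from drowning out a trusted friend, since each voice is scaled before being combined.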

The idea is to give a mechanical boost to real-world social memory. In theory, Alice could call all of her friends to ask what they know about Mallory. These friends could in turn call their friends, creating a chain reaction where everyone in town forms an opinion about Mallory and shares it with certain people. However, this is impractical in large communities. So instead, we want to make a larger volume of information available without fundamentally altering (as Trust 1.0 systems do) the dynamic of how it's transmitted.

The Adoption Problem

The obvious problem with such a system is that there is no clear path to widespread adoption. Like many social networks, the utility of the system depends on the density of the social graph. Without a large user base, there's no compelling reason to make yet another account on yet another website; and if nobody makes an account, there will never be a large user base.

Phase Two attempts to solve this problem.

Phase Two - Integration With Existing Social Networks

This phase involves integrating the system with existing social networks (Facebook, Couchsurfing, Diaspora, etc). There are two goals:

  • Make it easy to add connections - The prospect of tracking down the profiles of all of my friends in a new system is a daunting one. A Facebook plugin could let me quickly and easily add connections to my existing friends.
  • Allow out-of-network connections - If I want to add positive or negative feedback about someone, I can't be sure that they've signed up for an obscure trust-network website. If they haven't created an account on the trust system, I can still reference them by their unique identifier on another social networking site. This means that I can leave and query feedback about [email protected] and [email protected] long before either has ever heard of the system I'm using. Once they sign up, they can claim their various external identities and coalesce them into a single profile.
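When a user signs up and claims an external identity, any feedback accumulated against that identifier would be merged into their profile. A minimal sketch, with hypothetical names and a hypothetical "[email protected]" identifier; the real system would also need to verify ownership of the external identity before merging.

```python
def claim_identity(profiles, external_id, username):
    """Fold tag weights recorded against an external identifier into a
    newly created user profile.

    profiles: maps an identifier (external or internal) to a dict of
    tag -> accumulated weight. Verification of ownership is assumed to
    have happened elsewhere.
    """
    pending = profiles.pop(external_id, {})       # feedback left pre-signup
    merged = profiles.setdefault(username, {})
    for tag, weight in pending.items():
        merged[tag] = merged.get(tag, 0) + weight
    return merged

# Feedback was left about a Facebook identity before the user joined.
profiles = {"[email protected]": {"trustworthy": 2, "reliable": 1}}
claim_identity(profiles, "[email protected]", "john")
```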