Inferred Trust Ideas

These are some ideas started by Callum Macdonald. I have no academic background in this field, so there may be large bodies of work which I am not familiar with. Take these ideas with a pinch of salt! :)

When calculating inferred trust, I think trust systems should track two values: how much node A trusts node B, and how much node A trusts node B's judgement.


 * Yep, it makes total sense; most of the papers suggest this as well. Check the pages I just created and feel free to improve them ;-) Trust in characteristics, Trust in ability to express valuable trust statements and Gandhi paradox. --PaoloMassa 09:24, 24 September 2007 (PDT)

Some examples to clarify this point:
 * I trust Ben 90%, but I believe his choice in people is poor so I trust his judgement of others only 60%.
 * I trust Jenny only 50% but I think she's an excellent judge of character so I trust her judgement 80%.
 * I trust Bobby 80% and I think he's a reasonable judge of character, so I trust his judgement 80%.

Or in a technical system:
 * Server A has received 90% good data from Server B, so A trusts B 90%. However, Server B's recommendations have been poor, so A trusts B's judgement only 50%.
 * Server X receives predominantly poor data from Server Y so X trusts Y 40% but Server Y has provided excellent recommendations so X trusts Y's judgement 80%.
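The two-value idea above could be sketched as a small data structure. This is purely illustrative; the names `TrustEdge`, `trust` and `judgement_trust` are my own, not from any existing system:

```python
from dataclasses import dataclass

@dataclass
class TrustEdge:
    """One node's opinion of another, tracked as two separate values."""
    trust: float            # how much A trusts B directly (0.0 to 1.0)
    judgement_trust: float  # how much A trusts B's judgement of others (0.0 to 1.0)

# Server A's opinion of Server B from the example above:
# 90% good data, but poor recommendations.
a_to_b = TrustEdge(trust=0.9, judgement_trust=0.5)
```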


 * Wow, Callum, your English is 250% better than mine ;-( These examples are perfect for explaining the concept. Feel free to edit the previous pages to make them more understandable. And thanks! --PaoloMassa 09:31, 24 September 2007 (PDT)
 * Hahaha, feel free to copy my explanations anywhere else! :) Callum 09:41, 24 September 2007 (PDT)

The crux of this approach is to separate trust and trust of judgement.

Calculating Trust
To calculate inferred trust, we multiply trust of judgement by referred trust.

For example:
 * A trusts B 90% and A trusts B's judgement 80%.
 * B trusts C 60% and trusts C's judgement 50%.
 * A trusts C 80% (trust of judgement) x 60% (referred trust) = 48%
 * A trusts C's judgement 80% x 50% (referred trust of judgement) = 40%

Taking this example to the next level:
 * C trusts D 80% and trusts D's judgement 70%.
 * A trusts D 40% (calculated trust of judgement) x 80% (referred trust) = 32%
 * A trusts D's judgement 40% x 70% (referred trust of judgement) = 28%

The diagram on the right shows this simple example.
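The chained multiplication above can be written out as a short sketch. The helper name `infer` is hypothetical; the rule itself is exactly the one in the worked example, with the intermediary's judgement trust scaling both referred values:

```python
def infer(judgement_trust, referred_trust, referred_judgement):
    """Multiply the intermediary's judgement trust into both referred values.

    Returns (inferred trust, inferred judgement trust).
    """
    return (judgement_trust * referred_trust,
            judgement_trust * referred_judgement)

# A -> B -> C: A trusts B's judgement 80%; B trusts C 60% and C's judgement 50%.
a_c_trust, a_c_judgement = infer(0.8, 0.6, 0.5)   # 48% and 40%

# A -> C -> D: chain again, using A's inferred judgement trust in C.
a_d_trust, a_d_judgement = infer(a_c_judgement, 0.8, 0.7)  # 32% and 28%
```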


 * While many researchers propose this as well (multiplication of trust), I think it does not make sense. Consider the following example.

digraph G {
  rankdir=LR;
  Alice -> Bob [label="trust = 0.555"];
  Alice -> Bob [label="trust as judger = 0.1"];
  Bob -> Carol [label="trust = 0.9"];
  Bob -> Carol [label="trust as judger = 0.555"];
  Alice -> Dave [label="trust = 0.555"];
  Alice -> Dave [label="trust as judger = 0.9"];
  Dave -> Eve [label="trust = 0.1"];
  Dave -> Eve [label="trust as judger = 0.555"];
}
 * There is one path Alice/Bob/Carol (0.1 -- 0.9) and another path Alice/Dave/Eve (0.9 -- 0.1). They are inherently different, no? So they should be treated differently. What I propose is to do weighted sums and to put thresholds on minimum trust-as-judger values. --PaoloMassa 09:44, 24 September 2007 (PDT)
 * I disagree. I think the two paths are the same. Whether you trust someone's judgement at 0.1 or they trust someone else at 0.1 comes to the same thing. There's a fundamental lack of trust. I don't see any difference between the two scenarios. Callum 10:11, 24 September 2007 (PDT)
 * I (Alice) trust Bob at 0.1 (distrust; I consider him unreliable, a spammer). Bob trusts Carol at 0.9 (a lot). What is most meaningful from Alice's point of view? To not rely on what Bob says. Otherwise I could be influenced by someone that I consider a near-spammer. Let us suppose you trust Naz (random name of course...) only 0.1 and that Naz, knowing this, will issue a trust statement in Kasp of 0.00001. Then you get a predicted trust for Kasp of 0.000001, so you are relying on the opinion of someone you don't trust. Moreover, by multiplying you simply apply a trust decay, which is not reasonable: just because someone is not close to you in the social network (a fisherman in Hanoi), does it mean you should distrust her? I think not. The other case is totally different. I (Alice) trust Dave a lot (I rely on his judgements, I think we are like-minded, etc.) and Dave states that he considers Eve a bad user (a liar, a spammer, a very very very bad girl...). Now this time I should rely on Dave's opinions and predict a trust in Eve of 0.1 (not 0.9 * 0.1!). Check Trust metrics on controversial users: balancing between tyranny of the majority and echo chambers for another example with a simple network (hopefully clearly explained) --PaoloMassa 10:30, 24 September 2007 (PDT)
 * Agreed. I think this is negative trust which I see differently from positive trust. However, I have not yet figured out how that can be multiplied, negative multiplications don't quite work! Callum 12:55, 24 September 2007 (PDT)
 * Copying from Trust metrics on controversial users: balancing between tyranny of the majority and echo chambers (check the paper, around page 8, nothing mindblowing but in this way I don't have to rewrite it here ;-) --PaoloMassa 10:22, 24 September 2007 (PDT)

The predicted trust score of a user is the average of all the accepted incoming trust edge values (representing the subjective judgments), weighted by the trust score of the user who has issued the trust statement. (...) The reason for accepting only trust statements from users whose predicted trust is greater or equal than a certain threshold is the following. Users who have a predicted trust score below the threshold are users that MoleTrust predicted as untrustworthy (from the point of view of current source user). So their opinions should not influence the predictions about the trust score of other users and the best possible action is simply to not consider their trust statements. In fact, this precaution avoids situations in which the trust score of an unknown user depends only on statements issued by untrustworthy users. In the example of Figure 3, Carol is the only one to have expressed a trust statement on Ivan. If MoleTrust was to consider the trust statement expressed by user Carol, it would predict a trust score of 1.0 for Ivan but this predicted value would have been derived only from the opinion of a user, Carol, with very low predicted trust and hence from the opinion of an untrustworthy user. Moreover, if Carol knows or guesses its predicted trust score from the point of view of user Alice, she has incentives into providing trust statements with the unique goal of influencing the trust scores predicted by MoleTrust on behalf of user Alice. For example, user Carol could express a trust statement in all the other users with a value of 0.01, in order to nuke the reputation of all the unknown users, or could boost just the reputation of chosen users. In short, an untrustworthy user would be able to influence the predicted trust score of other users, a situation a trust metric should be able to prevent.
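The weighted-average-with-threshold rule quoted above could be sketched roughly as follows. This is my simplified reading of that one MoleTrust step, not the paper's actual code; the function name and the threshold value are assumptions:

```python
def predict_trust(incoming, threshold=0.6):
    """Predict a user's trust score from incoming trust statements.

    `incoming` is a list of (issuer_trust, stated_trust) pairs. Statements
    from issuers whose own predicted trust is below `threshold` are discarded;
    the rest are averaged, weighted by the issuer's trust score.
    """
    accepted = [(w, t) for (w, t) in incoming if w >= threshold]
    if not accepted:
        return None  # no trustworthy opinions: make no prediction at all
    total_weight = sum(w for (w, _) in accepted)
    return sum(w * t for (w, t) in accepted) / total_weight

# Carol (predicted trust 0.1) is the only one rating Ivan: her statement
# falls below the threshold, so no trust score is predicted for Ivan.
ivan = predict_trust([(0.1, 1.0)])  # None
```

Note how this differs from plain multiplication: a low-trust issuer's statement is dropped entirely rather than merely discounted, so an untrustworthy user cannot nuke or boost anyone's score.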


 * Moreover, I created this graph using Graphviz; there is a plugin installed in this wiki so that it is easier to create graphs quickly, to discuss them and even to change them collaboratively. Hopefully this is more practical than creating images and uploading them. Check the text of this wiki page and feel free to try Graphviz on this page or at TrustLet:Graphviz examples. --PaoloMassa 09:44, 24 September 2007 (PDT)
 * Ooh, I like Graphviz, very nice indeed! Callum 10:07, 24 September 2007 (PDT)

Also see Negative Trust for more about calculating trust by simple multiplication.

Philosophy
I believe that this model of representing both trust (context specific or general) and trust of judgement as separate values accurately reflects "real life trust". Human trust is a multi-faceted concept which cannot be easily tracked. A great deal of trust is instinctual, based on a vast history of past experience, and extremely hard to quantify scientifically.

I believe this model presents an improvement over a simple "I trust A x%". This model applies context to trust itself. The model could even be taken a step further to context specific trust. For example, I trust Bob's judgement of restaurants, but not of cars. I trust Bob's judgement of lawyers but not accountants. I trust Bob's judgement of animals but not people. And so on.
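Context-specific judgement trust like this could be represented as a simple mapping from context to value. A hypothetical sketch; the contexts come from the examples above, but the numeric values and the fallback rule are made up:

```python
# How much I trust Bob's judgement, broken down by context (0.0 to 1.0).
bob_judgement = {
    "restaurants": 0.9,
    "cars": 0.3,
    "lawyers": 0.8,
    "accountants": 0.2,
}

def judgement_trust(table, context, default=0.5):
    """Look up judgement trust for a context, falling back to a neutral default
    when no context-specific value has been recorded."""
    return table.get(context, default)
```

A general "trust of judgement" value then becomes just the default used when no context-specific value exists.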