Trust Metrics Evaluation project

From Trust metrics wiki - Trustlet, a free, collaborative project for collecting and analyzing information about trust metrics.


Short version for busy people:

"If I trust Bob and Bob trusts Mary, should I trust Mary? How much?" In the Internet era, finding algorithms to answer this question is an interesting and compelling research topic.



Goal of the project

Many people have proposed trust metrics. The goal of this project is

  • to bring all of them together
  • to review them
  • to explain them in an easy way
  • to code them
  • to release all the source code under GNU General Public Licence (GPL)
  • to test them on the same data
  • to understand in which situation a trust metric is better than another

Being hosted on a wiki, this is of course a collaborative effort.

If you feel that something is missing/wrong/not clear, please change it!


How will we collect the many trust metrics?

We ask proposers to send us the code of their proposed trust metrics. If it is not possible to obtain a trust metric in this way, we will implement it based on the papers/web sites describing it.

Precisely, what aspects of trust metrics will we compare?

The easiest answer is "everything". We could really compare trust metrics against everything. Picking from the "everything set", we could have:

  • The input and the output
  • Personalization level: some metrics provide local recommendations ("this peer is good for you"), while others provide global recommendations ("this peer is good", i.e. it is in the global Top 40 list)
  • Time performance
  • Complexity of the mental model, and how easy it is for the average user to understand the model (input, output and internal computation)
  • Possible suitable visualization techniques
  • Accuracy: this is a tricky field. A standard way in machine learning to test algorithms is the leave-one-out technique: you suppose you don't know a value (a certification, in this case), you try to predict it based on all the other known values (the other certifications), and you compute the error made in predicting it (the difference between the actual value and the predicted value). There could be better ways of course, and you can suggest new ones in AccuracyEvaluation [wiki]
  • TrustMetricCritera
  • you can add a new one here
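The leave-one-out accuracy evaluation described above can be sketched as follows. This is a minimal illustration, not Trustlet's actual evaluation code: `average_baseline` is a hypothetical, deliberately trivial "trust metric" used only to show the evaluation loop, and the names and 0-10 scale are assumptions.

```python
def leave_one_out_mae(statements, predict):
    """Hide each trust statement in turn, predict it from the remaining
    statements, and return the mean absolute error over all predictions
    the metric was able to make."""
    errors = []
    for i, (truster, trustee, actual) in enumerate(statements):
        rest = statements[:i] + statements[i + 1:]
        predicted = predict(rest, truster, trustee)
        if predicted is not None:
            errors.append(abs(actual - predicted))
    return sum(errors) / len(errors) if errors else None

def average_baseline(statements, truster, trustee):
    """Trivial baseline metric: predict the truster's mean outgoing
    trust value, ignoring the trustee entirely."""
    values = [v for t, _, v in statements if t == truster]
    return sum(values) / len(values) if values else None

statements = [("Paolo", "Cory", 10), ("Paolo", "Ben", 7), ("Ben", "Mary", 9)]
# Hiding ("Ben", "Mary", 9) leaves Ben with no other statements,
# so the baseline makes no prediction for that case.
mae = leave_one_out_mae(statements, average_baseline)
```

Any real trust metric can be plugged in as the `predict` argument, which makes it easy to run the same evaluation over several metrics on the same dataset.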

What can be the input?

There is no clear consensus on what a trust metric is or what it should do. This means that different people would probably propose different kinds of input for a trust metric.

We made the following choice: the input of a trust metric is a set of trust statements (a direct certification by one user about another, such as "Paolo: Cory is my friend" or "Paolo: I trust Ben as 7/10"). This results in an explicitly provided social network for every user. Datestamping each certification would make it possible to take user history into account (Paolo liked Doc a month ago but now he hates him). Of course, not all metrics consider time.
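A set of trust statements like those above is naturally represented as a weighted directed graph. The following sketch assumes a simple (truster, trustee, value) tuple format with a 0-10 scale; the names and the scale are illustrative, not a fixed Trustlet format.

```python
# Hypothetical trust statements: (truster, trustee, trust value on 0-10).
trust_statements = [
    ("Paolo", "Cory", 10),  # "Paolo: Cory is my friend"
    ("Paolo", "Ben", 7),    # "Paolo: I trust Ben as 7/10"
    ("Ben", "Mary", 9),
]

def build_graph(statements):
    """Turn a list of (truster, trustee, value) statements into an
    adjacency dict: graph[truster][trustee] = value."""
    graph = {}
    for truster, trustee, value in statements:
        graph.setdefault(truster, {})[trustee] = value
    return graph

graph = build_graph(trust_statements)
# graph["Paolo"] == {"Cory": 10, "Ben": 7}
```

To support time-aware metrics, each tuple could carry a timestamp as a fourth element; metrics that ignore time would simply discard it.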

We are collecting the different datasets: you can add a dataset you already have and even just the suggestion about a dataset you would love to analyze and dig into. Be bold!

The output?

"I noticed you don't know Kevin, but I think you should trust him as 9.4/10".

This could be the output of a trust metric. Essentially, a trust metric propagates trust certifications to "unknown" peers and outputs the predictions that can be made. There can also be an explanation of the prediction/recommendation, but not all metrics provide it.
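One very simple way to propagate trust to an unknown peer is to multiply normalized trust values along the strongest path from the asking user to the target. This is just one propagation strategy among many, sketched here under assumed names and a 0-10 scale; real trust metrics differ widely in how (and whether) they propagate.

```python
import heapq

def propagate_trust(graph, source, target, scale=10.0):
    """Predict source's trust in target as the highest product of
    normalized trust values over any path, rescaled to 0-10.
    Dijkstra-style max-product search over the trust graph."""
    best = {source: 1.0}
    heap = [(-1.0, source)]  # negate scores: heapq is a min-heap
    visited = set()
    while heap:
        neg_score, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            return -neg_score * scale
        for neighbour, value in graph.get(node, {}).items():
            score = -neg_score * (value / scale)
            if score > best.get(neighbour, 0.0):
                best[neighbour] = score
                heapq.heappush(heap, (-score, neighbour))
    return None  # no path: no prediction possible

graph = {"Paolo": {"Cory": 10, "Ben": 7}, "Ben": {"Mary": 9}}
# Paolo -> Ben (7/10) -> Mary (9/10): predicted trust 0.7 * 0.9 * 10 = 6.3
```

Returning `None` when no path exists captures the case where the metric simply cannot make a prediction, which the accuracy evaluation must handle.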

The code

See Code

Who is working on this project?

More information
