Leave-one-out

The technique used for the comparison is leave-one-out, a common evaluation technique in machine learning.

The process is the following. One trust edge (say, from node A to node B) is deleted from the graph, and the trust metric is then used to predict the trust value A should place in B, i.e. the value of the missing edge. The real value and the predicted value are compared to compute the error on this single prediction step. A trust metric may also be unable to compute a trust value at all for some edges; the fraction of edges for which it can produce a prediction is the coverage of the trust metric.
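The single step described above can be sketched as follows. The graph is stored as a dict mapping (source, target) edges to trust values, and `predict_trust` is a hypothetical stand-in for a real trust metric (it simply averages the trust values of the other edges into the target node); both names are illustrative, not from the original.

```python
def predict_trust(graph, source, target):
    # Hypothetical trust metric: average the trust values that other
    # nodes place in `target`. Returns None when no such edge exists,
    # i.e. the metric cannot compute a value (no coverage for this edge).
    ratings = [v for (s, t), v in graph.items() if t == target and s != source]
    if not ratings:
        return None
    return sum(ratings) / len(ratings)

def leave_one_out_step(graph, edge):
    real_value = graph[edge]
    # Delete the edge, then try to predict its value from the rest.
    reduced = {e: v for e, v in graph.items() if e != edge}
    predicted = predict_trust(reduced, *edge)
    if predicted is None:
        return None  # counts against coverage, not against error
    return abs(real_value - predicted)  # error on this single prediction

graph = {("A", "B"): 0.8, ("C", "B"): 0.6, ("A", "C"): 0.9}
error = leave_one_out_step(graph, ("A", "B"))  # |0.8 - 0.6| = 0.2
```

Any real trust metric with the same interface (graph, source, target) could be dropped in place of the averaging predictor.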

This evaluation step is repeated for every edge, and a global measure of error is computed, for example by averaging all the single errors and reporting the overall coverage, or by performing a deeper analysis restricted to edges that satisfy certain constraints: for example, only edges into nodes with many friends, only edges into "journeyer" nodes, or only "master" edges. What we will actually do depends on the results, which are not available at the moment.
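A sketch of the full loop, under the same assumptions as before: the graph is a dict of (source, target) edges, and `average_predictor` is a hypothetical placeholder for a real trust metric. It repeats the single-edge step for every edge and reports the mean absolute error together with the coverage.

```python
def average_predictor(graph, source, target):
    # Hypothetical metric: average of the remaining trust values into `target`.
    ratings = [v for (s, t), v in graph.items() if t == target]
    return sum(ratings) / len(ratings) if ratings else None

def leave_one_out(graph, predict):
    errors, covered = [], 0
    for edge, real_value in graph.items():
        # Delete one edge and ask the metric to predict its value.
        reduced = {e: v for e, v in graph.items() if e != edge}
        predicted = predict(reduced, *edge)
        if predicted is None:
            continue  # metric could not predict this edge
        covered += 1
        errors.append(abs(real_value - predicted))
    coverage = covered / len(graph)
    mae = sum(errors) / len(errors) if errors else None
    return mae, coverage

graph = {("A", "B"): 0.8, ("C", "B"): 0.6, ("A", "C"): 0.9}
mae, coverage = leave_one_out(graph, average_predictor)
```

The analyses restricted to certain edges (e.g. only "master" edges) would simply filter the loop to the edges of interest before averaging.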

See also: Cross-validation