Attacks on trust metrics

In general, in decentralized environments everyone is free to create as many identities as she likes. These puppet identities can be used to game the system, for example to acquire a higher reputation. A trust metric (and a system) is attack-resistant if it is able to cope with this situation.

= Possible Attacks =

Sybil attack
A single attacker creates a large number of pseudonymous identities ("sybils") and has them vouch for each other, or for one chosen identity, in order to gain a disproportionately large influence on the system. The term comes from Douceur's paper "The Sybil Attack" (cited further down this page).
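To make the effect concrete, here is a minimal sketch (the user names and the naive counting metric are hypothetical, not taken from any particular system): a metric that simply counts positive ratings gives every identity the same voting power, so sybils can inflate a chosen identity's score for free.

 # Minimal sketch (all names hypothetical): a naive reputation metric that
 # just counts positive ratings is trivially gamed by sybil identities.
 from collections import defaultdict

 def naive_reputation(ratings):
     """ratings: list of (rater, target) pairs, each meaning a positive rating."""
     score = defaultdict(int)
     for rater, target in ratings:
         score[target] += 1          # every identity's vote counts the same
     return dict(score)

 # honest users rating each other
 ratings = [("alice", "bob"), ("bob", "carol")]
 # the attacker creates 100 sybils that all rate her main identity
 ratings += [("sybil%d" % i, "mallory") for i in range(100)]

 print(naive_reputation(ratings))    # mallory's score dwarfs everyone else's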

= Strategies for coping with attacks =

Spotting malicious users is one of the goals of a trust metric.

By malicious users I mean spammers, eBay fraudsters, ...

However, "malicious users" is not quite the correct term, and in any case maliciousness is not global. The users I consider malicious can be totally different from the ones you consider malicious: you might consider Bush malicious, while I consider Kerry or Nader malicious; you might consider Fox News malicious, while I consider Indymedia malicious, ...

There are no globally bad peers! Everything is subjective!

-

for the paper:

Do we model attacks on distributed systems in general (p2p, the web, the internet), or only attacks on trust metrics / reputation systems (in which case we don't deal with the transport layer)? Do we also consider recommender systems? (A simple attack there is just copying the target's profile in order to become her closest neighbour.) Anything else? Do we deal with psychological attacks? (Seeing that a lot of people like "Mary81" can make you think that she is "good".)

"The big scam" is a funny (real?) long story about what you can achieve by letting other online people think that you have many supporters (described vaguely here, so I don't spoil the fun of reading it): http://static.circa1984.com/the-big-scam.html

-

Spontaneous trust
From http://web.archive.org/web/20010126122100/http://www.exocortex.org/p2p/spontaneous-trust.html

The Classic Eve Client Attack
This attack doesn't have much of a chance in this network: as soon as the client violates the code of conduct, it will be shut down.

The Instant Refill Team Attack
A more complex method of attack could involve a client that acts as a front for other clients. In one scenario, a front client could very quickly develop trust in a partner in the attack. Once trust is developed, the partner could then use that trust to abuse the network until it loses it. Then it returns to the front client with a new pseudonym, and the front client instantly creates another perfect trust relationship with it. The partner then goes and attacks the network again.

This type of attack can be reduced if, when we destroy the trust of a rogue client, we also decrease the trust of the client that gave the rogue client a high trust level. This kind of punishment could be propagated backwards through the network along the paths by which the rogue client's trust was determined. [i.e. backward propagation]
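A minimal sketch of this backward-propagation idea, under assumed data structures (a trust value per node and, for each node, the list of nodes that vouched for it); the decay factor and thresholds are made-up parameters, not taken from the source.

 # Minimal sketch (hypothetical structures and parameters): when a rogue client
 # is punished, also penalise the clients that granted it high trust,
 # propagating a decaying penalty backwards along the vouching edges.
 def punish(trust, granted_by, rogue, penalty=1.0, decay=0.5, min_penalty=0.05):
     """trust: node -> trust level; granted_by: node -> nodes that vouched for it."""
     frontier = [(rogue, penalty)]
     while frontier:
         node, p = frontier.pop()
         if p < min_penalty:
             continue                                  # the penalty has faded out
         trust[node] = max(0.0, trust.get(node, 0.0) - p)
         for sponsor in granted_by.get(node, []):
             frontier.append((sponsor, p * decay))     # weaker punishment upstream

 trust = {"front": 0.9, "partner": 0.8, "honest": 0.7}
 granted_by = {"partner": ["front"]}   # the front client vouched for the partner
 punish(trust, granted_by, "partner")
 print(trust)   # the partner is wiped out, the front is also hit, honest is untouched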

Within Code of Conduct Attacks

Although we can define codes of conduct that nodes have to follow, they may stretch their behaviour to include as much extra activity as possible while still remaining a decent member of society. This is equivalent to speeding when you know that there are no cops around. Thus far there is no reliable way to counteract these types of attacks, but it should be said that within-code-of-conduct attacks should be fairly benign compared to the attacks suffered in unprotected networks such as Gnutella.

Gossiping about Fake Reputations

This is really a slight modification of the Instant Refill Attack. In this attack a node will… [Gossiping doesn't seem like a great feature, I'll probably remove it]

Actually, gossip is one of the oldest forms of a trust metric, but one that was often accidentally or purposely misused. Variations include slander, character assassination, matchmaking, lobbying, propaganda, blackballing, shunning, informal consensus, etc.

The Strike and Recharge Attack

A node can simply build up trust by waiting for a period of time and acting within the specified codes of conduct. As soon as it develops enough trust, it can quickly carry out an attack, lose its trust, and then wait again. This type of attack will be conducted in brief spurts with long waits between them, but it will be an attack nonetheless. Currently no solution to this style of attack is known.
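Just to illustrate the pattern, a minimal simulation sketch with a made-up linear trust-accumulation model (the gain rate and attack threshold are arbitrary assumptions, not taken from the source):

 # Minimal sketch (hypothetical trust model): a strike-and-recharge attacker
 # behaves correctly to slowly accumulate trust, strikes as soon as its trust
 # crosses the level needed to do damage, loses everything, and starts over.
 def simulate(rounds, gain_per_round=0.01, attack_threshold=0.5):
     trust, attack_times = 0.0, []
     for t in range(rounds):
         if trust >= attack_threshold:
             attack_times.append(t)    # strike: abuse the accumulated trust
             trust = 0.0               # the system detects it and resets trust
         else:
             trust += gain_per_round   # recharge: behave within the code of conduct
     return attack_times

 print(simulate(500))   # brief attacks with long waits between them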

-

study attacks on eBay and on Slashdot:

you can always buy a user with good Slashdot karma somewhere online ... the same holds for every online community. This is interesting because it means it is always possible to assign a monetary value to reputation in online communities, and hence to estimate the GDP of the online community (i.e. how much wealth it generates).

---

See the threat methods here http://www.peerfear.org/fionna/index.html

See some interesting use cases for trust metrics here http://www.peerfear.org/fionna/use-cases.html

Distributed Denial of Service (DDoS): the attacker tries to overload the network so that a peer cannot retrieve information from other peers. The best way to cope with this is to wait: if the information you can get is unreliable, you should not place any confidence in it, and so you should not let an algorithm infer trust levels from it.
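A minimal sketch of that waiting strategy, assuming a hypothetical reliability flag on retrieved statements (the API is invented for illustration): during a suspected DDoS, skip trust inference rather than feeding it unreliable input.

 # Minimal sketch (hypothetical API): only run the trust metric on statements
 # whose retrieval was reliable; if nothing reliable is available (e.g. during
 # a suspected DDoS), keep the previously computed trust values and just wait.
 def update_trust(current_trust, statements, infer):
     """statements: list of (data, reliable) pairs; infer: the trust metric."""
     usable = [data for data, reliable in statements if reliable]
     if not usable:
         return current_trust          # better no update than a poisoned one
     return infer(current_trust, usable)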

--

someone with a lot of *identities* sells her services for money: she can issue a lot of "negative" trust statements against the chosen target. See "Game Theories - On-line fantasy games have booming economies and citizens who love their political systems. Are these virtual worlds the best place to study the real one?" By Clive Thompson http://www.walrusmagazine.com/04/05/06/1929205.shtml

In the game The Sims Online we see such a "business". *The Sim Mafia was founded by Jeremy Chase, a twenty-six-year-old in Sacramento. Players who want to destroy another character's reputation turn to the mob. The game has a system of black marks for punishing bad behaviour. If Chase is paid to "tag" someone, he gets his crime family — a loose collection of a hundred players — to place dozens and dozens of red tags on the victim. When they're done, other players will assume the character must have done something awful, and refuse to speak or trade with him.*

The Sim Mafia is at http://www.thesimmafia.com/home.htm

And, as cybernetics teaches, something to balance the mafia has just appeared: the Sim Shadow Government is a collective of Sims users who gathered to punish bad behaviour (and, in a sense, to define what is allowed and what is forbidden) http://www.simshadow.com/main.php?sg=home

Read about this at http://www.overmorgen.com/archive/2003/06/18/sim_mafia_and_othe.php

There is also inflation in the virtual world http://news.bbc.co.uk/2/hi/technology/2345933.stm

less related but interesting to me: people were trading EverQuest artifacts on eBay ... Sony initially fought the idea, but now it runs the auction site itself (taking a share of the money) http://www.wired.com/news/games/0,2101,67280,00.html

-

Section 3 of "Reputation" by Roger Dingledine, Michael J. Freedman, David Molnar, David Parkes, and Paul Syverson at http://www.scs.cs.nyu.edu/~mfreed/docs/reputation.html discusses some attacks on reputation systems.

--

this is a sort of self-attack: herd behaviour or information cascade

"Agents find it in their interest to copy behavior. Informational cascades often have idiosyncratic outcomes where everyone ends up doing the wrong thing — the blind following the blind."

see: http://en.wikipedia.org/wiki/Informational_cascade

see: http://welch.som.yale.edu/academics/journalcopy/1992-jpe.pdf

see: http://ist-socrates.berkeley.edu/~kariv/Research.htm

see: http://opus1.org/others/scholarlypapers/cascades.html
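To see how "the blind following the blind" can happen, here is a minimal simulation sketch (a simplified, non-Bayesian version of the Bikhchandani/Hirshleifer/Welch setup; the signal accuracy and the majority rule are arbitrary assumptions):

 # Minimal sketch (simplified, not a full Bayesian model): each agent gets a
 # noisy private signal about which option is better, but also sees what all
 # previous agents chose; once the observed majority is strong enough, agents
 # copy the crowd, and whole runs can lock onto the wrong option.
 import random

 def cascade(n_agents, good="A", signal_accuracy=0.6):
     choices = []
     for _ in range(n_agents):
         signal = good if random.random() < signal_accuracy else ("B" if good == "A" else "A")
         lead = choices.count("A") - choices.count("B")
         if lead >= 2:
             choices.append("A")       # crowd pressure overrides the private signal
         elif lead <= -2:
             choices.append("B")
         else:
             choices.append(signal)
     return choices

 runs = [cascade(100) for _ in range(1000)]
 wrong = sum(c.count("B") > c.count("A") for c in runs)
 print("%.1f%% of runs cascaded onto the wrong option" % (wrong / 10.0))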

in (social) networks, the preferential attachment property is invoked to explain why networks are scale-free (see Barabási's Linked): it is more probable that I'll link to boingboing.net, because a lot of people have already linked to it, than to randomsite.com, but this does not mean I'm taking the best action (boingboing can be a so-so site).
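A minimal sketch of preferential attachment (in the spirit of the Barabási–Albert model, simplified to one link per new node; the parameters are arbitrary):

 # Minimal sketch: each new node links to an existing node with probability
 # proportional to its current degree, so already-popular nodes keep
 # attracting links regardless of their intrinsic quality.
 import random

 def preferential_attachment(n_nodes):
     degree = {0: 1, 1: 1}                 # seed network: two linked nodes
     for new in range(2, n_nodes):
         nodes = list(degree)
         weights = [degree[v] for v in nodes]
         target = random.choices(nodes, weights=weights)[0]
         degree[target] += 1
         degree[new] = 1
     return degree

 deg = preferential_attachment(1000)
 print(sorted(deg.items(), key=lambda kv: -kv[1])[:5])   # a few early nodes hoard most links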

--

a googlebomb is an attack? http://en.wikipedia.org/wiki/Googlebomb

a citebomb?

the Slashdot effect, while usually an accidental DDoS, can also be triggered intentionally.

--

"The social cost of cheap pseudonyms" (Friedman and Resnick) shows that you cannot trust unknown people (because they can potentially be attackers) ... email spam is a demonstration, no?

--

On Wed, May 26, 2004 at 05:28:41PM +0100, Farez Rahman wrote:

>> I'm looking for info/papers describing threat models that are relevant
>> to trust/reputation systems, both centralised and decentralised.
>>
>> Does anyone have any pointers to this?

John Douceur, The Sybil Attack - how you can use multiple identities; or "re-use" identities.

Paul Resnick et al, "Reputation Systems" - a good overview of reputation systems

Mary Calkins, "My Reputation Always Had More Fun Than Me: The Failure of eBay's Feedback Model to Effectively Prevent Online Auction Fraud" - a survey of problems with eBay's feedback system. (She's a lawyer)

Tuomas Sandholm, "(Im)possibility of safe exchange mechanism design" - some ideas about how you can('t) use reputation to solve some problems.

Dan Wallach, "A survey of Peer-to-Peer security issues"

Raph Levien, "Attack Resistant Trust Metrics"

And my own work: Andrew Clausen, "Online Reputation Systems: The cost of attack of PageRank"

Are you planning to do research/development in this area?

--

good list

http://www.i2p.net/how_threatmodel#sybil

see http://ieeexplore.ieee.org/iel5/8713/27586/01231420.pdf?arnumber=1231420