Scripts

The Scripts folder is a collection of utility scripts written in Python, useful for managing and manipulating the datasets and the data calculated with trustlet. Some scripts can be installed by typing 'make' in the 'trustlet' and 'scripts' folders. Doing so installs the following applications:
 * wikixml2graph
 * trustlet_sync
 * trustlet_syncrm
 * lsc2
 * diffc2
 * dot2c2
 * netevolution

Below we explain how each of these applications works, along with some other optional scripts.

downloadDot.py
A utility script to download all dot files from trustlet.org/datasets and save them in a default path in your home directory, ready to be used in trustlet. This script works only for Advogato-like networks (advogato, kaitiaki, squeakfoundation, robots_net). For more info see the help of the script.

sync.py
This script synchronizes the local datasets database with the remote database (on trustlet.org). It uses svn.

First of all it downloads the missing datasets, then merges them with their local versions, and finally commits *all* changes. Note that:

 * only c2 files are merged; if a regular file already exists on the client it won't be updated
 * backup files (ending with ~) will not be uploaded
 * files and directories that begin with + will not be uploaded either

main dataset directory: /shared_datasets
svn hidden directory: /.shared_datasets

sync creates ~/shared_datasets and ~/.shared_datasets links

sync.py [basepath] [other options] (installed as trustlet_sync by Makefile in "Scripts" folder)

or

syncrm.py path [other options] (installed as trustlet_syncrm by the Makefile in the "Scripts" folder)

- path can be a directory or a file

Detailed explanation of the work done by sync

 * Load the timestamp of the last sync execution
 * svn up
 * For each file modified both by svn and by the user:
   * merge it if it is a c2 file
   * otherwise copy it from server to client
 * For each file on svn:
   * if it was updated by the client and not by the server, copy it from client to server
   * if the client version doesn't exist, copy it from server to client
 * For each local file:
   * add new files
   * copy updated files from the local dir to the svn dir
 * svn commit
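The per-file decision above can be sketched as follows. This is an illustrative sketch only: the function name and the idea of comparing modification times against the last-sync timestamp are assumptions for this example, not trustlet's actual implementation.

```python
import os
import shutil

def sync_file(server_path, client_path, last_sync, merge_c2):
    """Decide what to do with one file, following the steps above.

    last_sync is the timestamp of the previous sync; merge_c2 is a
    callable that merges two c2 files.  Returns a short string
    describing the action taken, for illustration.
    """
    server_mtime = os.path.getmtime(server_path) if os.path.exists(server_path) else None
    client_mtime = os.path.getmtime(client_path) if os.path.exists(client_path) else None

    if client_mtime is None:
        # client version doesn't exist: copy from server to client
        shutil.copy2(server_path, client_path)
        return "server->client"
    if server_mtime is not None and server_mtime > last_sync and client_mtime > last_sync:
        # modified on both sides: merge c2 files, otherwise the server copy wins
        if client_path.endswith(".c2"):
            merge_c2(server_path, client_path)
            return "merged"
        shutil.copy2(server_path, client_path)
        return "server->client"
    if client_mtime > last_sync:
        # updated by the client only: copy from client to server
        shutil.copy2(client_path, server_path)
        return "client->server"
    return "unchanged"
```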

wikixml2graph.py
This script creates c2 files from xml dumps of Wikipedia.

You can use pages-meta-current.xml.bz2, stub-meta-history.xml.gz or pages-meta-history.7z dumps, downloaded from http://download.wikimedia.org. If the input file is pages-meta-history, this script will also create the distrust graph.
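As an illustration of the kind of conversion involved, the sketch below extracts an edge list from a tiny, made-up Wikipedia-like XML fragment. The page structure and the rule "whoever writes on a user talk page links to its owner" are simplifications invented for this example, not the exact rules wikixml2graph.py implements.

```python
import xml.etree.ElementTree as ET

# Tiny made-up fragment in the spirit of a Wikipedia dump.
XML = """<mediawiki>
  <page>
    <title>User talk:Alice</title>
    <revision><contributor><username>Bob</username></contributor></revision>
    <revision><contributor><username>Carol</username></contributor></revision>
  </page>
  <page>
    <title>User talk:Bob</title>
    <revision><contributor><username>Alice</username></contributor></revision>
  </page>
</mediawiki>"""

def talkpage_edges(xml_text):
    """Return (writer, owner) edges from user talk pages."""
    edges = []
    root = ET.fromstring(xml_text)
    for page in root.iter("page"):
        title = page.findtext("title", "")
        if not title.startswith("User talk:"):
            continue
        owner = title[len("User talk:"):]
        for user in page.iter("username"):
            if user.text and user.text != owner:
                edges.append((user.text, owner))
    return edges
```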


If your language is not supported, you can add it using these functions:

from trustlet.conversion.wikixml2graph import addWikiLanguage, listWikiLanguage

which respectively add a language to, and list, the supported languages.

links.py
links.py is a script installed by the Makefile in /usr/bin (to install it and the other utility scripts you can simply type "make"). It is useful only if you want to participate in the development of trustlet. The script simply creates a symbolic link from the installation dir (usually /usr/lib/python2.5/site-packages/trustlet) to the development dir (trust-metrics/trustlet). This lets you make modifications directly in the trust-metrics/trustlet folder and test them on the fly

(if you change code in the trust-metrics folder, the modifications are automatically in effect on the system, and you can test them simply by importing trustlet)

The default parameters are already set. Type python links.py help or ./links.py help for more information.
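What links.py does can be sketched roughly like this; the function name and paths are illustrative, and the real script works out the installation directory itself:

```python
import os

def link_dev_tree(install_dir, dev_dir):
    """Replace the installed trustlet package with a symlink to the
    development tree, so edits in dev_dir take effect immediately."""
    if os.path.islink(install_dir):
        os.remove(install_dir)                         # refresh an existing link
    elif os.path.isdir(install_dir):
        os.rename(install_dir, install_dir + ".orig")  # keep the installed copy
    os.symlink(os.path.abspath(dev_dir), install_dir)
```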

advogatoxml2dot.py
This script converts the old advogato dataset (in a non-standard XML format) to the standard dot format. In order to use this dot file in trustlet, it must be re-converted to a c2 dataset.

lsc2.py
This script, which is installed in /usr/bin when you type 'make' in the scripts folder, is useful to see which keys a c2 file contains.
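A rough sketch of such a key listing, under the assumption (not verified here) that a c2 file is a bz2-compressed pickled dictionary; the real c2 format may differ:

```python
import bz2
import pickle

def list_c2_keys(path):
    """List the top-level keys of a c2 file.

    Assumption for this sketch: a c2 file is a bz2-compressed
    pickled dictionary.
    """
    with bz2.open(path, "rb") as f:
        data = pickle.load(f)
    return sorted(data.keys())
```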

import folder
Here are some scripts to import datasets stored in a non-standard format into Trustlet.

advogatoold2dot
This script converts the old advogato dataset format into dot format. You can see the usage of the script by calling it with the parameter help/--help/-h.
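For reference, serializing an edge list into dot format is straightforward. This sketch shows the shape of the output such a converter produces; the edge attribute name and the graph name are illustrative:

```python
def edges_to_dot(edges, name="advogato"):
    """Serialize (source, target, level) triples to Graphviz dot text."""
    lines = ["digraph %s {" % name]
    for src, dst, level in edges:
        lines.append('  "%s" -> "%s" [level="%s"];' % (src, dst, level))
    lines.append("}")
    return "\n".join(lines)
```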


advogatoxml2dot
See the description of advogatoxml2dot.py above.

getwikixml.py
Useful to automatically download wikixml datasets from http://download.wikipedia.org. It tries to download links generated from the input parameters:

http://download.wikimedia.org/<lang>wiki/<date>/<lang>wiki-<date>-<type>

with <date> in the input range, <lang> in the input languages and <type> in TYPES = [pages-meta-current.xml.bz2, pages-meta-history.xml.7z]

Type ./getwikixml.py --help to know how to set parameters.
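The generated links can be sketched like this; the exact URL pattern, including the dump-date component, is an assumption based on the pattern shown above:

```python
from itertools import product

TYPES = ["pages-meta-current.xml.bz2", "pages-meta-history.xml.7z"]

def dump_urls(languages, dates, types=TYPES):
    """Generate one candidate dump URL per language/date/type combination."""
    return [
        "http://download.wikimedia.org/%swiki/%s/%swiki-%s-%s" % (lang, date, lang, date, t)
        for lang, date, t in product(languages, dates, types)
    ]
```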