In the industry, the word trust is frequently used for appliances that figure out "Can I trust my user?" rather than the other way around: can the user trust her appliance or software? So for them, implementing trust frequently means doing biometric checks on the person at the machine. Ouch.
In PSYC we use the terms trust, web of trust, or even trust chain (a food chain of trust) to mean how a person can be helped to decide whether she can trust another person, a piece of software or a machine acting on behalf of that person, or even a machine acting on her own behalf. This is implemented by means of a distributed social graph in PSYC2 and in the form of trust metrics in PSYC1.
What other people think
2010-03-20, Craig says distributed trust is the coming killer app. Logical, since craigslist only makes sense if you can trust the people selling you things, offering you housing etc., and vice versa. The entire web suffers from this problem of untrustworthiness. Update 2016: the Internet still has no functional distributed social graph!
How psyced implements this
PSYC1 has a rudimentary implementation of trust metrics, although not in all places where it belongs. It is employed when surfing the social network of friends using your web browser. Some commands also support mentioning a _trustee, a person who trusts you and whom the other side might trust as well.
In the grand scheme of eliminating SPAM, the web of trust becomes essential for creating the connections between people. Once those are made (in the form of subscriptions etc.), trust is no longer necessary. It is a kickstart mechanism in a world where strangers usually don't want to do you any good.
The /trust command
This is specific to the trust implementation in psyced:
You can specify the degree of trust you have for a person using the /trust command: 0 is no trust and 9 is maximum trust (you should really only use 9 for other identifications of yourself).
The /show trust command tells you which trust degrees you have set which deviate from the friendship default of 5.
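The behavior of /trust and /show trust can be sketched as a small trust store that only records degrees deviating from the friendship default of 5. This is an illustrative Python sketch, not psyced's actual LPC implementation; the class and method names are assumptions.

```python
# Hypothetical sketch of a psyced-style trust store. Only degrees that
# deviate from the friendship default of 5 are stored, which is why
# /show trust can list exactly the deviating entries.
DEFAULT_TRUST = 5

class TrustStore:
    def __init__(self):
        self._deviations = {}  # person -> trust degree 0..9

    def set_trust(self, person, degree):
        """Like /trust <person> <degree>: 0 = no trust, 9 = yourself."""
        if not 0 <= degree <= 9:
            raise ValueError("trust degree must be between 0 and 9")
        if degree == DEFAULT_TRUST:
            # Back to the default: no entry needs to be kept.
            self._deviations.pop(person, None)
        else:
            self._deviations[person] = degree

    def get_trust(self, person):
        """Anyone without an explicit entry gets the friendship default."""
        return self._deviations.get(person, DEFAULT_TRUST)

    def show_trust(self):
        """Like /show trust: only the degrees deviating from the default."""
        return dict(self._deviations)
```

Storing only the deviations keeps the data small and makes "reset to default" the same operation as "forget the entry".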
Think of all the data you have in your profile. You don't want random people to see that, let alone nation-state actors.
Now suppose a stranger clicks on that address and would like to learn more about you. If you don't know this stranger, you give her only as much information as you would put on a public homepage. Not much, if you are a careful person aware of the dangers of modern internetworking.
But should she bring up a trustee in her request, somebody who is a friend of yours and vouches for her, then you can show her more of yourself than you would show a total stranger. This is the mechanism PSYC's trust modeling provides. Let me describe it in step-by-step detail:
- She clicks on your profile link.
- She found it on a friend's profile, or otherwise knows a common friend of yours.
- Your server gets her request and sees she claims to be a friend of your common friend.
- Your server sends a request to your common friend's server to find out if that's true.
- Your common friend's server acknowledges that the two of you are both friends of his (at varying levels of trust, but you get the picture).
- Now your server combines the trust he has for her with the trust you have for him; the end result is the trust you have for her, unless you one day specify otherwise.
- According to the trust you now have for her, you give her as much insight into your privacy as you deem appropriate. Or rather, your server does it for you, but you could program it to behave this way or that.
End result: all of your second-level social network can get in touch with you without you having to intervene manually, and still all your data stays on your own respective servers; it's all open source and potentially encrypted (not yet, but hey, no big deal.. then again, certification is a big deal sometimes).
This is what happens when you click on a psyc: link in PsycZilla, or you issue a /surf command.
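The walkthrough above can be sketched as a single trust-resolution step. This is a hedged Python sketch: the combination formula is the one given further below, but the function names and the visibility tiers are illustrative assumptions, not psyced's actual behavior.

```python
MAXTRUST = 9

def resolve_stranger_trust(trust_for_friend, friends_trust_for_her):
    """Combine the trust you have for the common friend with the trust
    his server reports for the stranger. The result becomes your initial
    trust for her, unless you later set one explicitly with /trust."""
    return trust_for_friend * friends_trust_for_her // MAXTRUST

def visibility(trust):
    """Illustrative tiers of how much profile data a trust degree unlocks;
    the thresholds here are made up for the example."""
    if trust >= 7:
        return "full profile"
    if trust >= 4:
        return "friends-level profile"
    if trust >= 1:
        return "public homepage data"
    return "nothing"
```

For example, if you trust the common friend at 6 and he trusts the stranger at 6, she resolves to 6 * 6 // 9 = 4 and would see the friends-level profile in this sketch.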
Pragmatically speaking, once a source such as a circuit has gained trust, it can perform some remote control or send to places that have a trust requirement. In psyced there is an old trick to gain trust by using checksums. More elaborate thoughts follow.
Trustiness (a definition)
Trustiness is the trust someone feels toward you.
The current way to calculate the trust metric is the following:
- FriendOfFriendTrust = (TrustOfFriend * TrustForFriend / MAXTRUST)
Normalized (with trust values ranging from 0.0 to 1.0), it would be:
- FriendOfFriendTrust = TrustOfFriend * TrustForFriend
Naturally there are several simple alternatives that would perform as well as this one. Still, in my opinion it is absolutely essential to make maximum trust values for "remote" friends possible, to keep people from building proxies.
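The two formulas above are the same metric on different scales, which a short sketch can make concrete (assuming integer degrees 0..9 as psyced stores them, and floats 0.0..1.0 for the normalized form):

```python
MAXTRUST = 9

def fof_trust(trust_of_friend, trust_for_friend):
    # FriendOfFriendTrust = TrustOfFriend * TrustForFriend / MAXTRUST,
    # on the 0..9 integer scale psyced stores (integer division rounds down).
    return trust_of_friend * trust_for_friend // MAXTRUST

def fof_trust_normalized(trust_of_friend, trust_for_friend):
    # The same metric on the normalized 0.0..1.0 scale:
    # FriendOfFriendTrust = TrustOfFriend * TrustForFriend
    return trust_of_friend * trust_for_friend
```

Note that multiplication means the combined trust can never exceed either factor; only a maximum value (9, or 1.0 normalized) passes trust through unchanged, which is exactly why maximum remote trust must be possible if proxy accounts are to gain nothing.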
An experimental implementation of the trust maths is in http://www.psyced.org/dist/world/net/entity.c, decorated by humorous #ifdef TRUSTINESS preprocessor blocks.
For pragmatic reasons more than others, the storage value of trust in psyced is a digit from 0 to 9. Trust could be defined as ranging from 0.0 to 1.0, with psyced simply storing rounded values (see degree for just that). Right now it uses decimal digits from 0 to 9, where 9 equals self. There is no distinction between not knowing someone and really hating someone. We should probably define unknown to be around 3, so that 0 is reserved for our greatest enemy.
Although Trust and Remote Trust as described above are supposed to preserve your privacy, in the PSYC1 implementation the amount of trust you put in someone is visible to all your friends. Trust relations between your friends are visible to you in some special cases.
Problem as described in Newscasting#TrackBack. Idea:
13:59:11 lynx infacted: i thought all these traceback ping pong blabla protocols where intended to create those automatic comment entries that somebody linked you from somewhere
13:59:21 lynx infacted: so they can comment to your post on their own blog
14:01:27 20after4 says: oh ...it's used for [[spam]] so it's not enabled on psyc.us
14:01:29 20after4 says: I can turn it on ;)
14:05:04 lynx fragt: yeah that's right.. but arent there ways to fix the spam?
14:05:36 20after4 says: yeah ..well, spam filters can catch most of it
14:06:52 lynx infacted: hm.. we need the web of trust even for that kind of thing
14:07:16 20after4 says: yeah that's why psyc makes more sense for so many things ;)
14:08:06 20after4 says: but one point to blog comments and trackbacks is the openness of it
14:07:56 lynx infacted: what if someone sells his trust to spammers.. then we have to prune him off
14:08:40 20after4 says: well as soon as spam comes from someone each spam recipient can remove their trust in the spam-source
14:11:30 lynx infacted: alright.. so we have a trust platform.. how would it have to work. *think*
14:11:48 lynx infacted: i announce my blog story by wide [[friendcast]]
14:12:08 lynx infacted: someone reads it and posts a reply, providing the trust path with it
14:12:54 20after4 says: yeah
14:12:58 20after4 says: so far so good
14:13:00 20after4 says: ;)
14:14:16 lynx infacted: then you kinda need to check the trust path for validity
14:14:26 lynx infacted: and you're ready to go
14:17:16 20after4 says: sounds good.
14:22:19 lynx infacted: checking the path is a similar operation as what we do when we surf along and step by step work out the trust values
14:23:33 20after4 says: the issue with trackback spam is people posting loads of irrelevant crap in your comments via trackback. It wouldn't be as easy if they had to have a valid/verifiable identity
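The path-checking operation discussed in the log, combining trust step by step along the chain a reply's trust path claims, can be sketched as follows. The function names and the acceptance threshold are assumptions for illustration; the per-hop combination is the friend-of-friend formula from above.

```python
MAXTRUST = 9

def path_trust(hops):
    """Combine trust along a path of friend-to-friend degrees (0..9 each),
    the same operation done step by step when surfing the social graph."""
    trust = MAXTRUST  # start from full trust in yourself
    for degree in hops:
        trust = trust * degree // MAXTRUST
        if trust == 0:
            break  # one untrusted hop prunes the whole path
    return trust

def accept_trackback(hops, threshold=3):
    """Illustrative spam gate: accept the comment only if the combined
    trust along the claimed path still reaches the threshold."""
    return path_trust(hops) >= threshold
```

A spammer who buys trust from one node in the path is cut off as soon as recipients drop that node's degree toward 0, since a single zero hop zeroes the whole path.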
Web of Trust
This term is normally seen in conjunction with PGP. Since the authority of certification authorities is disputed, having a web of trust for encrypted communications that is based on our own judgements, not somebody else's, sounds like a good idea.
Since we need a web of trust for several applications anyway, as described above, PSYC2 also uses it to authenticate encryption, thus providing an independent alternative to the X.509 certificates commonly employed with TLS.