On Trust (part I)

Posted in Game talk, Feb 4, 2006

Trust is a big topic. There are a number of definitions out there, tied to different domains: everything from the famous Ronald Reagan “trust, but verify” dictum applied to international arms treaties, to “trusted computing,” which is not about whether something is trustworthy, but whether it behaves predictably from a software/content provider’s point of view (and not necessarily the user’s!).

In general, most people tend, of course, to assume that “trust” means what the dictionary says: someone or something on which you can depend.

But let’s take a look at what the first six definitions in the dictionary actually say:

  1. Firm reliance on the integrity, ability, or character of a person or thing.
  2. Custody; care.
  3. Something committed into the care of another; charge.
  4. a. The condition and resulting obligation of having confidence placed in one: violated a public trust.
     b. One in which confidence is placed.
  5. Reliance on something in the future; hope.
  6. Reliance on the intention and ability of a purchaser to pay in the future; credit.

Common threads emerge in these definitions: a looking-forward into the future, a sense that something of value is being held by someone else, and a sense that integrity and character are bound up in these two factors. In short, integrity can be measured by whether future interactions with someone will damage something you value.

This cuts to the heart of what was being discussed in the post and resultant thread on reputation systems, which sometimes go by another name in social software circles: “trust metrics.” Perhaps the best-known advocate of truly robust trust metrics is another Raph, whose work I first became aware of years ago when researching reputation systems.

Using Neil Gaiman’s metric of “who is ranked higher on Google,” I’m Raph #2, and he’s Raph #3. There’s a Raph #1 as well, of course. This may seem like a total tangent, but it isn’t, because Google’s PageRank system is (roughly) based on reputation votes in the form of links to a site. The more inbound links, the higher-ranked you will be. Effectively, Neil Gaiman is Neil #1 and is considered to be the prime authority on Neilness, beating out Neil Young. Here Google is leveraging the social reputation of a vast network in order to build a trust metric. The thing of value is your time, and the expectation of future interaction is whether or not the page you get when you hit “I’m Feeling Lucky” is actually what you want.
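That vote-counting intuition is easy to sketch in code. What follows is only a toy version of the idea, with an invented link graph; real PageRank handles dangling pages, link spam, and much else besides:

```python
# A toy sketch of "inbound links as reputation votes," not Google's
# actual system; the pages and links below are made up for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Everyone keeps a small base score; the rest comes from votes.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                # A page splits its own rank among the pages it endorses.
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

web = {
    "fan-a": ["neil-gaiman"],
    "fan-b": ["neil-gaiman"],
    "fan-c": ["neil-young"],
    "neil-gaiman": [],
    "neil-young": [],
}
ranks = pagerank(web)
# More inbound links means a higher rank, so "neil-gaiman" comes out on top.
```

The point of the sketch is that no central editor decides who is Neil #1; the ranking falls out of the aggregate voting behavior of the network.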

A while back I attended a conference where the theme was centered on trust. The proceedings of that conference are sealed, but an official report was produced. I ended up going off on a massive mental tangent during that conference, exploring my own mental constructs regarding the words “trust” and “authority,” and thinking about it in terms of what it might mean for virtual spaces. Bear with me, because this gets a little bit abstract, and even a bit idealistic, as we go.

Some basics that I take for granted
Let’s take the following as a truism: we’re more likely to behave nicely to people that we know we’re going to see again. There’s a body of literature surrounding this, and it’s not anywhere near simple — a number of factors play into it from many different angles. We’re also more likely to behave nicely towards people who look like us, who exhibit common social signals as regards social class, and who are exuding some sort of authority. But broadly speaking, the literature indicates that an expectation of future interaction produces very different behavior than expecting never to see the person again. The classic depiction of this is Axelrod’s iterated tit-for-tat tournament.
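Axelrod’s result is easy to reproduce in miniature. Here is a minimal sketch of the iterated prisoner’s dilemma with the standard payoffs (temptation 5, reward 3, punishment 1, sucker 0); the strategies are the textbook ones, invented here for illustration:

```python
# (my move, their move) -> my score; C = cooperate, D = defect
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror whatever the opponent did last.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each strategy sees the *other's* past moves
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Two tit-for-tat players settle into mutual cooperation (30 points each over ten rounds), while a defector exploits tit-for-tat exactly once before being punished every round thereafter. The expectation of another round is what makes niceness pay.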

It’s quite straightforward to connect this to discussions of Dunbar’s Number, aka “the monkeysphere.” So let’s take that as another truism: most people only have repeated significant interactions with a limited number of other people.

We can then turn around and think about the notion of “community standards.” These days this tends to mean a sort of terms-of-service document imposed from on high, but that’s actually exactly the opposite of what I mean: instead, think of it as “the standards the community imposes on itself.” These can be summed up by another word: culture.

The words culture, cultivate, and of course cult all come from the same Latin root, and this is no accident. We think of cults as brainwashing, but that is also what cultures do. They cultivate the group (get it to grow and grow strong) by inculcating (from the Latin root meaning “to force” and “to trample”) certain practices and worldviews. Deviation from these practices is punished, often by removing the support of the group from the deviant individual.

What we have here is the core of the term “peer pressure,” and also the core of the notion of “community policing.” In a group of appropriate size, as long as there is a culture established, you will see fellow members of the society enforcing the implicit rules of that society. People who deviate will be maligned, ostracized, and eventually (should there be a form of authority with this power) ejected.

This is only possible because of the promise of future interaction. This is perhaps best illustrated by recent events here on this blog. A culture has been established (and in fact, I’ve worked over years to establish it, via the manner of my participation on many different forums). Certain types of attacks and commentary aren’t welcome within that culture for a variety of reasons. When people showed up contravening that unspoken rule, other blog regulars started openly putting down the offenders. Notably, the authority (meaning me) didn’t. Some offenders promised to reform purely so they could maintain their access to the group resource: namely, the blog. Others persisted in their behavior. Eventually, the outcry was enough that the authority (again me) stepped in, enforced the unspoken rules, and made them explicit. In effect, communitarian policing led to the creation of a “law.” The folks who joined but were disruptive were either normed into the group or were ejected via ostracization.

The “law” is interesting, because at that point, we move out of the realm of communitarian enforcement and into the realm of “authority-based” trust. At that point, there need not be any expectation of repeated interaction. You have a level of trust that the guy in front of you in the checkout line is not going to turn around and stab you in the eye with his Snickers bar because of mediated trust — you are relying on the authority above the both of you to keep that from happening.

Obligatory game-related point to make: This is in fact why in my community relations policies manual at SOE I recommend not having a “general game discussion” forum — by creating a community whose natural shared interest is so large, you rapidly grow beyond Dunbar’s Number, and thus rapidly create an environment where you must appeal to authority in order to get a civil environment. In other words, an overly large forum or game will have greater policing problems, because smaller groups are more effective at policing themselves via peer pressure. This is why the SWG forums were partitioned into so many smaller groups.

I also believe this is why the level-based segmenting of player populations into cohorts and “cozy worlds” is such a powerful dynamic, and why levelless systems have to find a compensating mechanic. Dunbar’s number is about our support system, not just about our knowledge sphere. Being outnumbered by the big bad world is uncomfortable. A big anonymous world is not as emotionally satisfying as a tight-knit community of reasonable size.

And all of that just covers the very first sentence in the notes I set out to transcribe. We have seven pages of notes to go! At this rate, I’ll finish in 2008.

Still to come in part 2 (and beyond, if need be):

  • “The relay problem” with authority appeals
  • “Trust is not transitive, but mistrust always is.”
  • Can we architect communitarian models that resist homogeny?
  • “The more distant the authority, the higher the burden of proof is.”
  • Is there such a thing as bottom-up authority?
  • What’s the difference between trust and faith?
  • Yes, of course we will talk about LambdaMOO!

  32 Responses to “On Trust (part I)”

  2. You sir would make a good Brit, where a natural appreciation for irony is a pre-requisite!

  4. Your first paragraph (where it begins “Let’s take the following as a truism”) can be summarized as “Familiarity Breeds Trust”. That doesn’t include authority issues, and probably a host of other things, but those three words put together a lot of it. =)

  5. Interesting, to say the least.

    As my son pointed out while I was reading this and discussing it with him, the online limits to the size of a community may be considerably lower than Dunbar’s number due to each individual’s need to maintain relationships outside the online medium. Since the online community only represents a fraction of the individual’s “monkeysphere,” and it’s relatively easy (relative to communities based on physical proximity, anyway) to change online communities, I wonder if online communities have more churn? How does that affect trust?

  6. I am not sure that familiarity breeds trust. Familiarity is perhaps an indicator of repeated past interactions, which then results in a decent predictor of future interactions. But we also know the stories of “burning bridges” — people who are familiar and yet act in ways that break our trust in them because they have chosen to sever past ties.

    Aufero, good questions. I am pretty sure online communities do have more churn, but it would be very interesting to see real data on that.

  7. OK, I think you stopped a little short, however, as to why large anonymous worlds aren’t as well received as tight-knit communities. It gets into the realm of expectations of reciprocal support.

    I help you, you help me. That I can rely on, to a point. If you fail to hold up your end, the trust is broken and I never help you again; it’s easy to cut you off because there is only us.

    However, as I help more people (X number), it gets much harder for me to keep track of transactions. You also end up in conflicts: I helped Y and Z, but they have conflicting goals, so now I have to choose. The interconnectedness gets harder and harder to maintain as the numbers grow larger, hence why folks maintain smaller communities. In the end it’s all about the ‘Me’, however, and what the ‘Me’ expects from those around that owe ‘Me’ (owed as in actions done, or actions that may be done in the future).

    That’s a very nutshell view of the various social theories on the matter, just from what I remember from my college days. A full discourse on it really exceeds the scope of a blog, honestly.

  8. Churn

    What would contribute to higher churn in an online community is ease of entrance and exit. It’s harder to leave a community when you can’t go more than ten steps without bumping into one of them. But if all you have to do is log off…

    Familiarity

    And I was a bit off-base… maybe it’s more accurate to say that familiarity breeds expectations, and one of those expectations is trustworthiness. Behavior patterns?

  9. “Familiarity breeds trust” doesn’t sum it up at all. For instance, if you move from Los Angeles to a town of 40 people in Montana, you *will* treat your neighbors nicely on your very first interaction with them, though you’ve never seen them before. Because you know you’ll see them again. It has nothing to do with familiarity.

  10. Hm… yes, you’re right. I think I made a subconscious connection and expressed it completely wrong. Expectation of familiarity, perhaps? The term is important, I’m sure of it; it’s just HOW that’s befuddling me now.

  11. You may have noticed that even when forum managers (in games or otherwise) try to keep a “general forum” from forming, one tends to spontaneously erupt in some other space. You may have also noticed that attempts to engineer a fit between a designed space and a tight community also tend to be frustrated over time by the creation of information flows and communication channels that interlink much larger and less trust-centric social networks on a global scale.

    You’re missing something here, perhaps because you’re focusing too exclusively on trust. Let’s put it this way: there is something that pushes against tight-knit communities of reasonable size and the kinds of futureward forms of trust they engender.

    The old Gemeinschaft-Gesellschaft distinction may hold promise in understanding what pushes against attempts to restrict communities to some allegedly natural or intrinsic “appropriate size”. Small face-to-face communities all around the world have given way to large anonymous ones in the last two centuries. There are a lot of reasons for that, many of them having nothing to do with the will or volition of individual human beings or communities of human beings. But some of the reasons do lie in what people want, what they desire.

    In part, they lie with a certain form or conception of freedom; that the very things which produce “trust” as you define it also produce intimate tyrannies, restrictions, forms of domination. Many people seek global anonymity in order to be free of constraint–and to choose their communities rather than have those communities chosen for them.

    And there is a “culture” that is vital and powerful at that larger anonymous level of sociality, even a form of “trust”. But the trust which can exist at that scale is a completely different thing. For one, it depends vitally on the free flow of information throughout the entirety of the social space, not on fear or anticipation of what particular individuals might think of you as an individual tomorrow. So every time a community manager tries to confine people who know they’re in a larger whole to what they judge to be some appropriate sub-community of interest, people seek to tear down the barriers to the flow of information across the global scale of community.

  12. From a practical point of view, though, Tim, it doesn’t matter if the larger forum erupts elsewhere. In fact, more power to it if it does. As long as you as a service operator have smaller communities that you can still effectively communicate with, and that you can police at a reasonable cost, you’re still gaining the benefits of running forums.

    I agree that there are many social forces that push against staying at the smaller size. That’s a separate issue from whether a given service provides forums of a given size.

  13. In a group of appropriate size, as long as there is a culture established, you will see fellow members of the society enforcing the implicit rules of that society. People who deviate will be maligned, ostracized, and eventually (should there be a form of authority with this power) ejected.

    This is exactly why RP-MUSHes work.

    Though, unless someone is exploiting code, cheating, or harassing other players, ejection is largely voluntary. When the very fabric of your existence in the game is pegged, to some degree, on your participation in the culture, the game will cease to be of interest rather quickly if you can’t adhere to its norms. After all, you can’t roleplay solo.

    As you might imagine, the Dunbar limit is extremely evident in RP MUSHes. Once the player base exceeds a certain size, it will by necessity fracture into various social strata and cliques. I’d say that the stable state for an RP-MUSH is well below Dunbar’s 150 — or even below the UO 60.

    However, RP MUSHes also need a mixing-bowl type area (or “main stage,” as I call it), to foster interplay (and create tension) between the factions, and provide opportunities for dramatic confrontations. The Crossroads tavern kidnapping in A Game of Thrones is precisely the sort of thing a main stage is for.

    I think this sort of interplay happens naturally in most MMOs, due to trade, resource competition, and other factors. During these interactions at the interfaces between factions, we are standard-bearers. Our outside interactions affect the reputation of our entire faction. Thus, it’s possible to not just ostracize someone from your group, but to pressure other groups to ostracize someone, as well. (You will sometimes see on the WoW forums, for example, people pressuring guild-leaders to boot known ninja-looters.)

    So, guilds/factions/cliques can act as atomic players in terms of future-interaction, on a whole different scale from individual players. And this could expand to higher tiers, as well. (We are at war with Nation X, which holds Guild Y, which holds Player Z. It doesn’t matter if Z is the nicest guy on earth. We’ve never met Z, and he’s just an X to us.) When there are too many people to know, we organize our trust through abstraction.
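    That last idea, organizing trust through abstraction, can be sketched as a simple fallback chain. This is a hypothetical illustration (the function, names, and scores are invented here, not any game’s actual system): when we lack direct history with a player, we fall back on our opinion of their most specific group.

```python
def trust(player, direct, group_rep, membership):
    """direct: player -> score from our own past interactions.
    group_rep: group -> our opinion of that group.
    membership: player -> list of groups, most specific first."""
    if player in direct:
        return direct[player]  # personal history trumps group reputation
    for group in membership.get(player, []):
        if group in group_rep:
            return group_rep[group]  # inherit the group's reputation
    return 0.0  # total stranger: no basis for trust either way

direct = {"alice": 0.9}
group_rep = {"guild-y": -0.5, "nation-x": -0.8}
membership = {"zed": ["guild-y", "nation-x"], "alice": ["guild-y"]}

# Alice's own record overrides her guild's bad name; Zed, whom
# we've never met, inherits Guild Y's reputation ("he's just an X to us").
```
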

  14. What I’m suggesting is that the larger forum will erupt within the space of the smaller communities that a manager is trying to gerrymander into existence–that if you fail to provide one on the grounds that you don’t want that larger forum, you’re going to get it anyway, and then have to invest serious effort constantly slapping it down. You cannot manage the desire for a global information channel out of any community; if you don’t want that channel to exist, you might as well have no forums at all.

  15. It’s been my experience that it really isn’t that hard to create and manage this situation, Tim… Remember the goals from the service provider’s point of view:

    • have high-quality forum feedback
    • make it easy to find relevant feedback to the issues you are interested in
    • minimize the amount of policing and disruptive behavior required

    Providing lots of forums labelled with smaller group identities (say, by class and by server) gives you all of the above. Adding larger forums that are not intended to meet those goals, and that serve merely as places to blow off steam, is a minor addition; they can even be largely unpoliced.

    In addition, there are surprising labels you can come up with; a forum specific to game development can, if a culture of being very thoughtful and wordy is established early, keep its size small and limited only to thoughtful participants.

    Again, it’s not that you don’t want that channel to exist — it’s that it is of limited utility for those trying to read it for governance purposes. (This leaves aside the issue of its utility in terms of datamining — a service like Intelliseek can glean much from such a channel).

  16. I guess I disagree with the latter claim you make–that the larger global channel is of limited utility for governance purposes. I think that’s about poor heuristics on the part of the readers plus a desire to make governance about the details and the mechanics and not the general will or public sphere of a virtual world. I know that developers or community managers feel like they have limited time, but as someone who watches those kinds of forums on the side, I don’t feel I have much trouble finding “high signal” messages.

    It’s too easy to just complain about the noise in a general forum, or dismiss it as filled with negativity, or some such. I’d tie this a bit to my running critique of the avoidance of sovereignty as a concept by developers, a desire to make governance a question of managing specific mechanics, playing factions off against one another while veiling interior processes of decision-making, and so on.

    If there’s a trust issue between player communities and developers, in fact, I’d say in part it’s because developers tend to want to decompose general, sweeping issues that involve the geist of their communities into game-mechanical questions that treat each class or race or faction as a community of transfer-seekers who can be met as an isolated constituency. This gives the general citizenry of a virtual world a sense of being evaded, of shouting into the void, of knowing about overall problems that developers refuse to speak to.

    It’s not unlike the problem technocratic bureaucrats have in responding to general social crises, or moments of public malaise. They want to meet those moments in manageable chunks, in terms of particular constituencies seeking service. But sometimes geist is geist. You won’t find and meet the tipping point moment in class or server forums; you’ll find it only in a general, noisy channel. When it comes time to speak to the citizens as a whole, that function gets ceded to public relations-speech, or there is an engagement that seems not to meet the general will head-on or in a way that corresponds to the social reality of the virtual world.

  17. I hear what you’re saying, Tim. And I think many aspects of your critique are dead on. But the fact remains that high signal messages are often not impactful messages (usually the opposite, actually — they tend to get lost). That smaller constituencies easily get lost in large forums. That changes often ARE made for individual constituencies, or to manage specific mechanics.

    I’ve been an advocate of direct interaction with “the citizens as a whole” for a long time; luncheons, the UO essays, House of Commons chats, and my methods of interaction on forums all point towards that. I will say, however, that the geist actually does manifest in the smaller forums — if it’s truly the geist, it will be apparent in the sum of the parts as well as in a general forum.

  18. […] I tried to make my third content chapter, but it’s a toughie. What’s worse, it covers much of the same ground that Raph is covering here and here, but from a different angle (and much more tersely).

    You see, there’s a couple of connected “bits” here. “Trust” is one thing. “Interaction/Friendship” is another thing. “Exploration” is another thing. But all three are intrinsically tied to social networking.

    One of my favorite game designs (of mine) is a game called “Spider Space”. Utilizing a unique resource allocation system, it encouraged a kind of social networking which would hopefully lead to high-trust environments. But more than that, it would lead to high-interaction (friendship) environments and high-exploration environments. Because all three of these things are related, and all three of them are related to content creation.

    A social network is one of the kinds of player generated content that all massively multiplayer games have. I don’t believe it is possible to keep players from forming into social groups, whether you call them guilds or clans or shmurgies.

    The key here is to remember that content is a living thing. Rather, it’s intended to be a living thing. Content created by a developer is not really a living thing. Players will waltz through it, barely affecting it and barely being affected by it. If they go back it is more of the same. Content created by a group of players, such as a social network, is a living, adaptive organism. Interacting with it keeps players entertained and coming back.

    Similarly, all the things you might think of as player generated content are really just symptoms of the real content. Someone creates a new quest, or a neat-looking car. You say, “yaaay, another piece of content!” But it’s not. Not meaningfully. That car, that quest, it’s a tiny blip. And chances are, it’s a meaningless blip, since it is a waste of time compared to the very best quests and cars.

    The important part of making a car or a quest is when people say, “that’s really cool! Have you thought about putting in hamsters?” or “that sucks, I bet I can make a better one.” That’s the content. The content is the part of the game which draws the player into making the little pieces of piffle. They aren’t the best pieces of piffle. They’re probably wastes of time. But the act of creating them and being challenged to do it better – that’s the content.

    That content is enabled by social networks more than any other thing. You can encourage players to do it better with game mechanics, of course. Most games do this – “levelling up was cool, wasn’t it? Now try to level up again!” But that’s a shallow method. It (A) doesn’t appeal to most people and (B) is tightly constrained and limited. Using a social network for this kind of encouragement gives a stronger feedback system and a much wider variety of useful producables. In short, it provides a better, stronger content creation loop than the insignificant “level up and choose equipment” content “creation” loops in most games.

    But a social network needs to have a few features in order to encourage content creation. These features are always the same, regardless of what kind of content you allow your players to create.

    A) Trust. Yes, trust is a critical part of content creation. There are two ways to enhance trust in this kind of environment. The first is to make the ruleset support zero-trust interactions, like contracts or trades. The second is to keep track of good and bad interaction records – but be careful, that can be farmed and griefed without much difficulty.

    B) Interaction. Trust isn’t the only important detail. Interaction is, if anything, more important. People will learn who to trust through repeated interactions, even if they don’t have any real “trust support” system. But people who don’t interact won’t interact. So your system has to reward interaction. Not just creating content, but finding content others have created. Not just talking, but talking meaningfully about content. This needs to be balanced, however, because it can easily lead to swamping popular creators. Hence…

    C) Membership Control. Your social network is only a help to you at very specific sizes. If it is too small, it is useless. Too large, it is full of noise and gets in your way more than helping you create content. Therefore, you need to have a way of (A) getting people to have significant social networks and (B) keeping them from being assaulted by every fanboy that notices them.

    The so-called “LFG” syndrome falls under here. People without an adequate social network stand by the sidelines, unable to proceed until they get some useful interactions. So they shout, “Looking for group”, and cross their fingers. You need to have something that draws people into social groups. Something which makes even useless newbs into something useful. Simultaneously, you have to have something which gently keeps people from swamping one content creator – both for the sake of that creator and to allow for greater numbers of content creators.

    Spider Space had all of these. These ideas really aren’t new to me, but they are the subject of chapter 3 of content creation. However, this chapter doesn’t taste all polished and textbooky, does it? Hmmm. […]

  19. […] Somewhat related to the Reputation in MMOGs article and commentary from last week, Raph Koster has a two part piece looking at webs of trust and how trust is relayed through communities. […]

  20. […] https://www.raphkoster.com/?p=310In part I I gave the basic grounding for my take on the issues of trust, reputation, and policing. Now I want to dig into some of the deeper issues there. […]

  21. […] Raph Koster (former UO and SWG developer) has an interesting post on trust on his website: https://www.raphkoster.com/?p=308 It might be somewhat relevant to your theory, especially this part: Quote: […]

  22. […] Tepper writes, "I fully expected this unique form of in-game democracy that we use, to breed an ever-growing, increasingly intrusive government, just as real-world democracy often does. In fact, one big advantage I saw to Tellings was the chance to undo the implosion that I thought was inevitable. The culture that evolved in ATITD was just the opposite, and I still don’t have a good explanation. The force of law has always been applied with the lightest touch. And it’s not just in law that Egypt has been cautious…In three years we’ve elected about 20 Demi-Pharaohs, players with the power to permanently exile up to 7 of their countrymen. And in three years, that power has never been used…" I don’t think I’m nearly as surprised as Tepper seems to be. Partly because I think his expectation here invokes a common form of cyberlibertarian narrative about contemporary American politics that is at the least an over-simplified hypothesis about the development of post-1945 liberal democracies, that refers implicitly to some kind of universal tendency of individuals to surrender freedom to authority. Partly because, as some players have also observed, the size of the playerbase of A Tale in the Desert promotes a more trusting and close-knit community (an issue Raph Koster has been writing about lately). Partly also, as some ATITD players have noted, there’s a selection filter here, that ATITD is a boutique product far less likely to attract the kinds of griefers and antisocial players who players in other synthetic worlds might desperately wish to control or expel, and far more likely to attract people with a lively interest in participating in the political affairs of their synthetic world. Link: Pharaoh’s Expectations […]

  23. […] Partly because, as some players have also observed, the size of the playerbase of A Tale in the Desert promotes a more trusting and close-knit community (an issue Raph Koster has been writing about lately). Partly also, as some ATITD players have noted, there’s a selection filter here, that ATITD is a boutique product far less likely to attract the kinds of griefers and antisocial players who players in other synthetic worlds might desperately wish to control or expel, and far more likely to attract people with a lively interest in participating in the political affairs of their synthetic world. […]

  24. […] Read Part I Read Part II Read A side note […]

  25. […] We might also ask ourselves whether the communitarian model is in fact a polite fiction we sell ourselves; Tim Burke touched on this in his comments on Part I. Is there such a thing as true bottom-up authority? After all, communitarian ideals are driven by consensus, but in practice consensus, particularly in small groups, is driven by strongmen, by natural leaders, by persuasive techniques. This is precisely why the development of things like secret balloting was such a core driver of human social development. […]

  26. […] There was a long discussion about player trust at Raph’s site a few months ago, so I won’t open up that can of worms again, and he is even skirting the edge of this topic today over on his site.* […]

  27. […] – Three part analysis of Trust, with many comments, and applied to gaming worlds in particular, at https://www.raphkoster.com/2006/02/04/on-trust-part-i/ […]

  28. […] Raph has been building a series on Trust. I find the meandering through all things real and virtual, personal and sociological, to be a great springboard for one of the potential themes throughout: the relationship between MMOG developers/publishers and the players. […]

  29. […] That’s really the problem; of the three, it’s having a relationship with a community that doesn’t scale very well, as I have written about before in my series “On Trust” (1, 2, side note, 3, 3.5). […]

  30. […] trust in humans. We trust where there is repeated interaction that is successful. (See my series: On Trust (part I), On Trust, Part II, On Trust, Part III). Games typically rely on iteration, and therefore can be […]

Sorry, the comment form is closed at this time.