Information sharing is something of a holy grail in computer security. The idea is simple enough: if we could only find out what sorts of attack our peers are experiencing, we could use that information to protect ourselves. But, as Alexandre Sieira pointed out at the FIRST conference, this creates a trust paradox. Before I share my experiences of being attacked, I need to trust that those I share with won’t misuse or mishandle that information in ways that damage my security or reputation. One solution is to share anonymously, so the information can’t be associated with a particular victim. But if you are going to act on information that you receive, you need to trust the source that provided it, which is much harder if you don’t know who they are!
One way around this is to pass information through a third party, who can provide assurances about the source and quality without disclosing the identity of the source. That model is often followed by Information Sharing and Analysis Centres (ISACs), but it requires a well-resourced central point: one of the reasons why ISAC membership tends to have a significant price tag.
Alexandre proposed another model: that used by social networks. These manage to provide the reputation information that recipients need for trust, while at the same time giving contributors the level of anonymity they need. This relies on a couple of tools. First, public feedback: recipients can indicate the value they obtained from what was shared. This helps other recipients trust the shared information; it also helps the contributor (and others) work out what information their peers find useful. Second, because contributions can be identified as coming from the same source, the source’s reputation builds up over time, based on its previous contributions. This way people and organisations can come to trust one another (both as contributors and recipients) without needing to know each other’s real-world identities.
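The mechanism described above can be sketched in a few lines: contributors are known only by a stable pseudonym, recipients rate each contribution, and a reputation score accumulates per pseudonym. This is purely illustrative; the class, the 1–5 rating scale, and the pseudonyms are assumptions for the sketch, not anything specified in the talk.

```python
# Illustrative sketch (assumed design): pseudonymous contributors build
# reputation from recipients' public feedback, without revealing identities.
from collections import defaultdict


class ReputationTracker:
    """Accumulates per-pseudonym reputation from 1-5 feedback ratings."""

    def __init__(self) -> None:
        # pseudonym -> list of ratings received for its contributions
        self._ratings: dict[str, list[int]] = defaultdict(list)

    def record_feedback(self, pseudonym: str, rating: int) -> None:
        """Record one recipient's rating of a contribution."""
        if not 1 <= rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        self._ratings[pseudonym].append(rating)

    def reputation(self, pseudonym: str) -> float:
        """Mean rating for a pseudonym; 0.0 if it has never contributed."""
        ratings = self._ratings.get(pseudonym, [])
        return sum(ratings) / len(ratings) if ratings else 0.0

    def contribution_count(self, pseudonym: str) -> int:
        """How many rated contributions a pseudonym has made."""
        return len(self._ratings.get(pseudonym, []))
```

Because the key is a pseudonym rather than a real identity, recipients can weigh new contributions by past track record while the contributor stays anonymous:

```python
tracker = ReputationTracker()
tracker.record_feedback("anon-badger-42", 5)
tracker.record_feedback("anon-badger-42", 4)
tracker.reputation("anon-badger-42")  # 4.5
```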
But social networks also suggest that even in a fully-trusting group, we shouldn’t expect universal sharing to break out. In any community of humans it turns out that some people share a lot more than others, and that there’s always likely to be significantly more information shared bilaterally than in an open (though restricted-entry) group. Observations across different communities suggest that private/bilateral sharing naturally settles at around 80% of the total information exchanged.
And even a successful sharing community won’t provide all the information an organisation needs to protect itself. Comparison of different security sharing communities, together with commercial and open source feeds, suggests that there are far more threats out there than ever make it on to any of these lists. Even if you subscribe to all of them, there will still be occasions when you are the first to encounter a particular threat. Every organisation therefore needs the skills to recognise when that happens, to protect itself using what it has learned from sharing groups, and to help others by sharing in return.