In a world where data storage is almost unlimited and algorithms promise to interrogate data to answer any question, it’s tempting for security teams simply to adopt a “log everything, forever” approach. At this week’s CSIRT Task Force in Malaga, Xavier Mertens suggested that traditional approaches are still preferable.
With the speed of modern networks and systems, logging everything is almost guaranteed to produce files far too big for humans to interpret, so incident responders become entirely dependent on algorithms. And, since those algorithms don’t know which events or incidents really matter to the organisation, they may well highlight or explain the wrong things. Xavier suggested that having too many logs may well give organisations a false sense of security.
Another problem with this approach is that no one knows which logs are actually important, so it’s hard to work out which are worth spending time on when, for example, their format changes (if we even notice), or they are challenged by accountants or regulators.
So it seems it’s still better to start from a purpose-driven approach: think through the kinds of incident that it’s most important for you to be able to deal with, work out which logs you need to investigate those, and ensure those are available for as long as there is any point in investigating.
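That mapping exercise can be made concrete. As a minimal sketch, assuming a small inventory of priority incidents (the incident names, log sources and retention periods below are illustrative assumptions, not recommendations), each incident lists the log sources needed to investigate it and how long investigation remains worthwhile; the retention plan for each log source is then simply the longest period any incident demands of it:

```python
# Hypothetical example: derive per-log-source retention from the
# incidents we most need to investigate. All names and periods
# here are illustrative assumptions.

INCIDENT_REQUIREMENTS = {
    # incident type -> {log source: retention needed, in days}
    "phishing-compromise": {"mail-gateway": 90, "auth": 90},
    "ransomware": {"endpoint": 180, "auth": 180, "backup": 30},
    "data-exfiltration": {"proxy": 365, "dns": 365, "auth": 180},
}

def retention_plan(requirements):
    """For each log source, keep the longest retention any incident needs."""
    plan = {}
    for logs in requirements.values():
        for source, days in logs.items():
            plan[source] = max(plan.get(source, 0), days)
    return plan

print(retention_plan(INCIDENT_REQUIREMENTS))
```

A side benefit of writing the mapping down is that it answers the questions raised above: you know which logs matter, why each is kept, and which incidents lose coverage if a feed breaks or its retention is cut.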
If, as still seems to be too common, a breach remains undiscovered for months or years, investigation is likely to be more trouble than it is worth, since it’s likely that some essential knowledge will have been lost, and the attacker will have had ample time to do all the damage they want. Belated discovery of breaches is a sign that we need to improve our detection processes, not that we need to retain even more logs for even longer.