I’d like to check my understanding of the severity of signatures matching on exfiltration traffic.
To me, “exfiltration” means that some malicious actor is removing data from my network without permission.
Further, a signature matching on exfiltration behavior should be either for an attempt at exfiltration, which may or may not succeed, or for observed exfiltration that could only have been the result of a successful attack.
According to Rules Severities, signatures of severity “Major” indicate “an active attempt at compromise of a service or end system”.
“Critical” severity means “that an end system is likely to be compromised based on the activity detected”.
Mapping these to exfiltration-related signatures, I take that to mean “Major” indicates an attempt to compromise a host that may have succeeded, while “Critical” means that traffic was observed that could only have come from a compromised host.
As of today, I count 367 signatures in the ET Open rules that have “exfil” (irrespective of case) in the msg field. When you eliminate rules in the HUNTING, ADWARE_PUP, and INFO categories, and filter out those whose msg fields indicate uncertainty (“possible”, “attempt”, “suspect”), there are 291. Of those, 259 are of “Major” severity. Only two are of “Critical” severity.
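For anyone who wants to reproduce the counting, here’s roughly the approach as a quick Python sketch. It assumes a local copy of the ET Open rules, that the category follows the usual “ET CATEGORY …” convention in the msg field, and that severity lives in `signature_severity` metadata; the sample rules below are made up for illustration, not real signatures.

```python
import re

SKIP_CATEGORIES = {"HUNTING", "ADWARE_PUP", "INFO"}
UNCERTAIN = ("possible", "attempt", "suspect")

def exfil_severity_counts(rule_lines):
    """Count exfil-related rules by signature_severity, applying the
    category and msg-keyword filters described above."""
    counts = {}
    for line in rule_lines:
        m = re.search(r'msg:"([^"]+)"', line)
        if not m or "exfil" not in m.group(1).lower():
            continue
        msg = m.group(1)
        # ET msg convention: "ET CATEGORY description ..."
        parts = msg.split()
        category = parts[1] if len(parts) > 1 else ""
        if category in SKIP_CATEGORIES:
            continue
        if any(word in msg.lower() for word in UNCERTAIN):
            continue
        sev = re.search(r"signature_severity\s+(\w+)", line)
        severity = sev.group(1) if sev else "Unknown"
        counts[severity] = counts.get(severity, 0) + 1
    return counts

# Tiny illustrative sample (not real ET rules):
sample = [
    'alert http any any -> any any (msg:"ET MALWARE Data Exfil Observed"; metadata:signature_severity Major; sid:9000001;)',
    'alert dns any any -> any any (msg:"ET HUNTING Possible DNS Exfil"; metadata:signature_severity Minor; sid:9000002;)',
    'alert tcp any any -> any any (msg:"ET TROJAN Exfiltration via FTP"; metadata:signature_severity Critical; sid:9000003;)',
]
print(exfil_severity_counts(sample))  # → {'Major': 1, 'Critical': 1}
```

In the real run I fed it the full rules file line by line rather than an in-memory list.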
Does that mean that only two of these signatures are matching on observed exfiltration traffic? Or were they perhaps written when “Major” and “Critical” meant different things, and should be re-categorized as “Critical”?
Thanks @samjenk , that’s great feedback. Internally, I think the team needs to become more comfortable using ‘Critical’ for severity. I know when I was catching alerts those certainly sprang to the top of the queue - but the balance of that is wanting to make sure that when something fires with that severity, it has fidelity. I’ll review and likely make some metadata changes over the weekend.
Thanks very much for considering this, @rgonzalez. I worry about “fidelity” too, but as described in Rules Severities, “severity” has more to do with how badly your infrastructure is affected by the thing the rule detects. When I’m trying to assess whether a rule will give me false positives, I look for tells that give me confidence the rule will match on something very specific. A very specific rule has a higher chance of being “accurate”, in my experience, and I regard those of high accuracy and Critical severity as the highest “fidelity”.
Could expanding the use of the “Confidence” tag help drive us toward a “fidelity” signal? That is, if something is of Critical severity and “High” confidence, it has high “fidelity”? I’ve been designing a workflow for assessing rules automatically that makes multi-field comparisons like this outside of suricata-update, generating a disable.conf file full of rules I want to turn off. Having the Confidence field available for more rules would really help.
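To make the multi-field idea concrete, here’s a minimal sketch of the kind of thing my workflow does. The field names follow ET metadata conventions (`confidence`, `sid`); the policy of disabling anything with “Low” confidence is just an example of one comparison, not my full logic, and the sample rules are invented.

```python
import re

# Confidence values I want tuned out (example policy, not my real one).
DISABLE_IF = {"Low"}

def disable_conf_lines(rule_lines):
    """Emit sid values for rules whose confidence metadata says to
    disable them; suricata-update's disable.conf accepts bare sids."""
    out = []
    for line in rule_lines:
        sid = re.search(r"\bsid:\s*(\d+)", line)
        conf = re.search(r"\bconfidence\s+(\w+)", line)
        if sid and conf and conf.group(1) in DISABLE_IF:
            out.append(sid.group(1))
    return out

sample = [
    'alert tcp any any -> any any (msg:"ET MALWARE Exfil A"; metadata:signature_severity Critical, confidence High; sid:9000011;)',
    'alert tcp any any -> any any (msg:"ET MALWARE Exfil B"; metadata:signature_severity Major, confidence Low; sid:9000012;)',
]
for sid in disable_conf_lines(sample):
    print(sid)  # each line goes straight into disable.conf
```

The real version crosses confidence against severity (Critical + High = leave alone; everything else gets scored), but the mechanics are the same: parse the metadata fields, apply the policy, write sids.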
And for the record, I’m not going to go around demanding that all rules be of “Critical” severity and “High” confidence. There are times when I’m less concerned about a rule’s accuracy and more concerned about getting a detection for a suspected case of something bad. For example, I’ve been impressed by how quickly you all have gotten signatures into the rule set when there’s a zero-day exploit. When that happens, I don’t mind alerting on something of medium or low accuracy; if it’s an emerging threat and is of Critical severity, I want to err on the side of alerting on it. If such rules can’t start out highly accurate, I can use the updated_at field to tune out lower-confidence signatures as they age, using the process I’m developing.
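The aging idea looks something like this sketch. It assumes ET’s `updated_at` date format (e.g. `updated_at 2024_01_15`); the 90-day window and the “anything below High confidence” rule are arbitrary examples of the knobs I’d tune, and the sample rules are made up.

```python
import re
from datetime import date, datetime

MAX_AGE_DAYS = 90  # example cutoff, not a recommendation

def aged_out_sids(rule_lines, today=date(2024, 6, 1)):
    """Return sids of rules whose confidence is below High and whose
    updated_at date is older than the cutoff."""
    sids = []
    for line in rule_lines:
        sid = re.search(r"\bsid:\s*(\d+)", line)
        conf = re.search(r"\bconfidence\s+(\w+)", line)
        upd = re.search(r"\bupdated_at\s+(\d{4}_\d{2}_\d{2})", line)
        if not (sid and conf and upd):
            continue
        updated = datetime.strptime(upd.group(1), "%Y_%m_%d").date()
        if conf.group(1) != "High" and (today - updated).days > MAX_AGE_DAYS:
            sids.append(sid.group(1))
    return sids

sample = [
    'alert tcp any any -> any any (msg:"ET MALWARE Exfil C"; metadata:confidence Medium, updated_at 2024_01_15; sid:9000021;)',
    'alert tcp any any -> any any (msg:"ET MALWARE Exfil D"; metadata:confidence Medium, updated_at 2024_05_20; sid:9000022;)',
]
print(aged_out_sids(sample))  # → ['9000021']
```

In practice `today` would just be `date.today()`; I pin it here so the example is deterministic.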