
Exploitation modelling matters more than we think

Our own Krzysztof Kotowicz put together a pretty neat site called the Bughunter University. The first part of the site deals with some of the most common non-qualifying issues that are reported to our Vulnerability Reward Program. The entries range from mildly humorous to ones that still attract some debate; it's a pretty good read, even if just for the funny bits.



Just as interestingly, the second part of the site also touches on topics that go well beyond the world of web vulnerability rewards. One page in particular deals with the process of thinking through, and then succinctly and carefully describing, the hypothetical scenario surrounding the exploitation of the bugs we find - especially if the bugs are major, novel, or interesting in any other way.



This process is often shunned as unnecessary; more often than not, I see this discussion missing, or done in a perfunctory way, in conference presentations, research papers, or even the reports produced as the output of commercial penetration tests. That's unfortunate: we tend to be more fallible than we think we are. The seemingly redundant exercise in attack modelling forces us to employ a degree of intellectual rigor that often helps spot fatal leaps in our thought process and correct them early on.



Perhaps the most common fallacy of this sort is the construction of security attacks that depend entirely on exposure to pre-existing risks of a magnitude comparable to or greater than the danger posed by the new attack. Familiar examples of this trend include:



  • Attacks on account data that can be performed only if the attacker already has shell-level access to said account. Some of the research in this category deals with the ability to extract HTTP cookies by examining process memory or disk, or to backdoor the browser by placing a DLL in a directory not accessible to other UIDs. Other publications may focus on exploiting buffer overflows in non-privileged programs through a route that is unlikely to ever be exposed to the outside world.



  • Attacks that require physical access to brick or otherwise disable a commodity computing device. After all, in almost all cases, having the attacker bring a hammer or wire cutters will work just as well.



  • Web application security issues that are exploitable only against users who are using badly outdated browsers or plugins. Sure, the attack may work - but so will dozens of remote code execution and SOP bypass flaws that the client software is already known to be vulnerable to.



  • New, specific types of attacks that work only against victims who already exhibit behaviors well-understood to carry unreasonable risk - say, the willingness to retype account credentials without looking at the address bar, or to accept and execute unsolicited downloads.



  • Sleight-of-hand vectors that assume, without explaining why, that the attacker can obtain or tamper with some types of secrets (e.g., capability-bearing URLs), but not others (e.g., the user's cookies, passwords, or the server's private SSL keys), despite their apparent similarity.





Some theorists argue that security issues exist independently of exploitation vectors, and that they must be remedied regardless of whether one can envision a probable attack. Perhaps this distinction is useful in some contexts - but it is still our responsibility to precisely and unambiguously differentiate between immediate hazards and more abstract thought experiments of the latter kind.
