Setting up bug bounties for success


Bug bounties end up in the news with some regularity, usually for the wrong reasons. I've been itching to write
about that for a while - but instead of dwelling on the mistakes of the bygone days, I figured it may be better to
talk about some of the ways to get vulnerability rewards right.



What do you get out of bug bounties?




There are plenty of differing views, but I like to think of such programs
simply as a bid on researchers' time. In the most basic sense, you get three benefits:





  • Improved ability to detect bugs in production before they become major incidents.

  • A comparatively unbiased feedback loop to help you prioritize and measure other security work.

  • A robust talent pipeline for when you need to hire.



What don't bug bounties offer?




You don't get anything resembling a comprehensive security program or a systematic assessment of your platforms.
Researchers end up looking for bugs that offer favorable effort-to-payoff ratios given their particular skills and
the very imperfect information they have about your enterprise. In other words, you may end up with a hundred
people looking for XSS and just one person looking for RCE.




Your reward structure can steer them toward the targets and bugs you care about, but it's difficult to fully
eliminate this inherent skew. There's only so far you can jack up your top-tier rewards, and only so far you can
go lowering the bottom-tier ones.



Don't you have to outcompete the black market to get all the "good" bugs?




There is a free market price discovery component to it all: if you're not getting the engagement you
were hoping for, you should probably consider paying more.




That said, there are going to be researchers who'd rather hurt you than work for you, no matter how much you pay;
you don't have to win them over, and you don't have to outspend every authoritarian government or
every crime syndicate. A bug bounty is effective simply if it attracts enough eyeballs to make bugs statistically
harder to find, and reduces the useful lifespan of any zero-days in black market trade. Plus, most
researchers don't want their work to be used to crack down on dissidents in Egypt or Vietnam.




Another factor is that you're paying for different things: a black market buyer probably wants a reliable exploit
capable of delivering payloads, and then demands silence for months or years to come; a vendor-run
bug bounty program is usually perfectly happy with a reproducible crash and doesn't mind a researcher blogging
about their work.




In fact, while money is important, you will probably find out that it's not enough to retain your top talent;
many folks want bug bounties to be more than a business transaction, and find a lot of value in having a close
relationship with your security team, comparing notes, and growing together. Fostering that partnership can
be more important than adding another $10,000 to your top reward.



How do I prevent it all from going horribly wrong?




Bug bounties are an unfamiliar beast to most lawyers and PR folks, so it's natural to be wary and try to plan
for every eventuality with pages and pages of impenetrable rules and fine-print legalese.




This is generally unnecessary: there is a strong self-selection bias, and almost every participant in a
vulnerability reward program will be coming to you in good faith. The more friendly, forthcoming, and
approachable you seem, and the more you treat them like peers, the more likely it is for your relationship to stay
positive. On the flip side, there is no faster way to make enemies than to make a security researcher feel that they
are now talking to a lawyer or to the PR department.




Most people have strong opinions on disclosure policies; instead of imposing your own views, strive to patch reported bugs
reasonably quickly, and almost every reporter will play along. Demand that researchers cancel conference appearances,
take down blog posts, or sign NDAs, and you will sooner or later end up in the news.



But what if that's not enough?




As with any business endeavor, mistakes will happen; total risk avoidance is seldom the answer. Learn to sincerely
apologize for mishaps; it's not a sign of weakness to say "sorry, we messed up". And you will almost certainly not end
up in the courtroom for doing so.




It's good to foster a healthy and productive relationship with the community, so that they come to your defense when
something goes wrong. Encouraging people to disclose bugs and talk about their experiences is one way of accomplishing that.



What about extortion?




You should structure your program to naturally discourage bad behavior and make it stand out like a sore thumb.
Require bona fide reports with complete technical details before any reward decision is made by a panel of named peers;
and make it clear that you never demand non-disclosure as a condition of getting a reward.




To avoid researchers accidentally putting themselves in awkward situations, have clear rules around data exfiltration
and lateral movement: assure them that you will always pay based on the worst-case impact of their findings; in exchange,
ask them to stop as soon as they get a shell and never access any data that isn't their own.



So... are there any downsides?




Yep. Beyond souring your relationship with the community if you implement your program wrong, the main consideration
is that bug bounties tend to generate a lot of noise from well-meaning but less-skilled researchers.




When this happens, do not get frustrated and do not penalize such participants; instead, help them grow. Consider
publishing educational articles, giving advice on how to investigate and structure reports, or
offering free workshops every now and then.




The other downside is cost; although bug bounties tend to offer far more bang for your buck than your average penetration
test, their payouts are less predictable. Annual expenses tend to average out over time, but there is always
some possibility of having to pay multiple top-tier rewards in rapid succession. This is the kind of uncertainty that
many mid-level budget planners react badly to.
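To put that budgeting worry in numbers, here's a minimal Monte Carlo sketch. The reward tiers and per-year report rates below are invented purely for illustration, not taken from any real program; the point is that even when the long-run average is stable, a single year can blow well past it if a few top-tier reports cluster together:

```python
import math
import random

# Hypothetical reward tiers: (average payout, expected reports per year).
# All numbers are made up for illustration; tune them to your own program.
TIERS = [
    (500, 60),     # low severity (e.g. reflected XSS)
    (5_000, 12),   # high severity
    (50_000, 1),   # top tier (e.g. remote code execution)
]

def poisson(rng, lam):
    """Draw a Poisson-distributed count (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_year(rng):
    """Total payouts for one simulated year of reports."""
    return sum(payout * poisson(rng, rate) for payout, rate in TIERS)

rng = random.Random(1)
years = [simulate_year(rng) for _ in range(10_000)]
mean = sum(years) / len(years)
print(f"mean annual spend:    ${mean:>10,.0f}")
print(f"worst simulated year: ${max(years):>10,.0f}")
```

With these made-up rates, the expected annual spend is around $140,000, but the tail years are dominated by the rare, expensive tier - which is exactly the lumpiness that unsettles budget planners.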




Finally, you need to be able to fix the bugs you receive. It would be nuts to prefer not to know about the
vulnerabilities in the first place - but once you invite the research, the clock starts ticking and you need to
ship fixes reasonably fast.



So... should I try it?




There are folks who enthusiastically advocate for bug bounties in every conceivable situation, and people who dislike them
with fierce passion; both sentiments are usually strongly correlated with the line of business they are in.




In reality, bug bounties are not a cure-all, and there are some ways to make them ineffectual or even dangerous.
But they are not as risky or expensive as most people suspect, and when done right, they can actually be fun for your
team, too. You won't know for sure until you try.

