For years the trend has been to move pretty much everything to the cloud. Finance apps? Cloud. Badging system? Cloud. Meat thermometer? Cloud. One of the major benefits is the minimal time to deployment compared to on-prem solutions. But the cloud brings its own set of risks, as everyone is now more aware than ever, thanks to the media’s newfound appetite for reporting web vulnerabilities and data breaches.
Even in the pre-Heartbleed era, it was well known that “black hats” (people finding security vulnerabilities to exploit them) outnumbered “white hats” (people finding them to get them patched). This imbalance is greater now than ever, and with the ubiquity of cloud solutions, the number of targets has grown dramatically. One way companies are addressing this is by building out an internal “red team”: security engineers hunting for the same vulnerabilities black hats are looking for. However, building an internal red team can be challenging, even for large enterprises. The combination of a very competitive market and limited resources makes it increasingly difficult to keep a red team fully staffed internally. Enter the crowd.
Crowdsourcing is gaining popularity much like cloud adoption did years ago. We have seen an increase in threat intelligence sharing vendors, special interest collaboration groups, and bug bounty programs. OneLogin has been leveraging the first two in some capacity over the years, and we are now working on launching our own bug bounty program.
Typically, bug bounty programs have been the realm of companies like Google, Facebook, or Mozilla. But just as the cloud opened doors for companies to deploy systems they otherwise could not, crowdsourced bug bounty vendors are opening doors for companies that lack the resources to manage a program on their own. These vendors provide initial issue triaging: weeding out duplicate submissions, verifying that submissions have accurate reproduction steps, and assigning a priority to each. They also manage the overhead of maintaining a bug bounty program: vetting security researchers (the crowd), managing payouts, and running a reporting platform.
To get our feet wet, we ran a timeboxed private bug bounty program with Bugcrowd. A private program, as the name implies, is invite-only, which helped throttle the influx of submissions that typically spikes whenever a new public bug bounty program launches. Private programs have other benefits as well, including a better ratio of signal (valid submissions) to noise (invalid submissions), and timeboxing focuses researchers’ efforts on a defined period of time.
In the end, running a private bug bounty program is much like the periodic third-party pen tests we have been running for years. One major difference is that the number of researchers looking for bugs is far larger since they are crowdsourced, which has both benefits and drawbacks because each researcher works independently. Otherwise, the quality of information provided for each bug, and for the program overall, was similar to that of a standard pen test, although this can vary from vendor to vendor.
This type of program could, in theory, replace periodic pen tests, but there are enough differences between the two that we will use it to augment our existing pen tests for now. Maybe there is a public bug bounty program in our future, but for now we can get the benefits of the crowd without drowning in noise. All of this supports our ongoing objective of mitigating the risks inherent to the cloud.