Saturday, February 11, 2017

Vulnerability disclosure in an era of vulnerability rewards

Note: This (and every post in this blog) is a personal blog post which expresses my personal opinion, and doesn't necessarily reflect the opinion of my employer.

Recently a few bug hunters have been making the rounds on the internet, looking for vulnerabilities and then contacting the website owners asking for money in exchange for disclosing them. This prompted a discussion on Twitter which I thought was interesting.

What we know today as Bug Bounty Programs (or, more aptly named, Vulnerability Reward Programs) started with individuals proactively searching for vulnerabilities on their own time (for fun, out of curiosity, or to evaluate the security of the software they use) and then reporting them publicly or privately to the vendors. In many cases, these bug hunters got no permission or blessing from the vendor ahead of time, but vendors found that activity useful, so they decided to formalize it and launched reward programs. Fast forward to 2017, and tens of millions of dollars have been paid to thousands of bug hunters all around the world. That turned out pretty well, after all.

However, this created a new normal, where bug hunters might expect to get paid for their research "per bug". And consequently, this also created an incentive for a few bug hunters to reach out to vendors without these programs in the hope of convincing them to start one. I don't think the bug hunters doing that are "bad people". What they are doing is slightly better than sitting on the bugs, or sharing them on private forums as was the norm 10 years ago, and while I definitely think they should be disclosing the vulns and getting them fixed, I respect their freedom not to.

That said, this left those vendors without reward programs (whether for lack of money or lack of support) in a difficult position, in that they get the worst of both worlds: little attention from skilled professional bug hunters, and tons of attention from those that are just looking to make money out of it. And at least some of those vendors ended up perceiving the behavior as extortion.

This can result in a very dangerous "us vs. them" mentality. We shouldn't be fighting with each other on this. We shouldn't be calling each other scammers. The only effect of that is burning bridges and alienating the bug hunters we most need to work closely with.

What I think we should do, as vendors, is politely and consistently decline every attempt to disclose issues in a way that is unfair or dangerous to users and other bug hunters. That means: if you can't give out rewards, don't cave in to those asking you for money by email. If you already have a reward program, don't bend or change the rules for these people.

Instead, we, as vendors, have to invest in creating a community of bug hunters around us. Many people are willing to invest time to help vendors, even when money is not involved. The reasons vary (they do it for the challenge, curiosity, or fun), and in exchange for their help, many of them appreciate a simple "thank you" and recognition in an advisory. Vendors need to be welcoming, transparent, and appreciative. This is important for any vendor that wants to collaborate with independent security researchers, even more important for those vendors just starting to build their community, and especially important for those that need a lot of help and don't have many resources.

What I think we should do, as security researchers, is not let a few vendors give us the wrong impression of the rest of the industry. We should continue to be curious, and continue to advance the state of the art. However, just as jaywalking carries risks, we need to be careful about how we do this work. Pentesting without authorization is very risky, even more so if the testing causes damage, or gives the impression it was malicious.

Instead, as researchers we should treat vendors as we would treat our neighbors: be respectful and polite, and don't do to them what we wouldn't want done to us. I think 99.99% of researchers already behave like this, and the few that don't are just learning. Let's make sure we continue to grow our community with respect and humility and help those that are just starting.

The security disclosure world of today is a lot better than how it was 10 to 20 years ago, and I'm glad to see such great relationships between vendors and security researchers. Let's keep on improving it 😊.

Wednesday, February 08, 2017

🤷 Unpatched (0day) jQuery Mobile XSS

TL;DR - Any website that uses jQuery Mobile and has an open redirect is now vulnerable to XSS - and there's nothing you can do about it; there isn't even a patch ¯\_(ツ)_/¯.

jQuery Mobile is a cool jQuery UI system that makes building mobile apps easier. It does part of what frameworks like Ember and Angular do for routing. Pretty cool, and useful. Also vulnerable to XSS.

While researching CSP bypasses a few months ago, I noticed that jQuery Mobile had this funky behavior in which it would fetch any URL in location.hash and put it in innerHTML. I thought that was pretty weird, so I decided to see if it was vulnerable to XSS.

Turns out it is!

The bug


The summary is:

  1. jQuery Mobile checks if you have anything in location.hash.
  2. If your location.hash looks like a URL, it will try to set history.pushState on it, then it will do an XMLHttpRequest to it.
  3. Then it will just innerHTML the response (a rough sketch of this flow follows below).
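To make the flow concrete, here is a rough sketch of that pattern in plain JavaScript. This is my own approximation of the behavior described above, not the actual jQuery Mobile source; the function name and element id are made up.

// Approximation of the vulnerable pattern -- NOT the real jQuery Mobile
// code, just the shape of it.
function loadPageFromHash() {
  var target = location.hash.replace(/^#/, '');

  // 1. Only act if the hash looks like a URL/path.
  if (!target || target.charAt(0) !== '/') return;

  // 2. Record the "navigation". This throws for cross-origin URLs, which is
  //    the only thing standing between the hash and full XSS.
  history.pushState({}, document.title, target);

  // 3. Fetch the URL and inline the response as HTML.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', target);
  xhr.onload = function () {
    // Attacker-influenced markup ends up in innerHTML.
    document.getElementById('page').innerHTML = xhr.responseText;
  };
  xhr.send();
}

window.addEventListener('hashchange', loadPageFromHash);
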
As a strange saving grace, the fact that it calls history.pushState first makes the attack a little bit harder to pull off, since you can't set history.pushState to cross-origin URLs, so in theory this should be safe.

But it isn't, because if you have any open redirect you are suddenly vulnerable to XSS, since the open redirect is same-origin as far as history.pushState is concerned.
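To illustrate what such an endpoint looks like, here is a hypothetical open redirect, sketched as an Express handler. The route and parameter name are made up for this example; any endpoint with the same shape has the same effect.

// A hypothetical open redirect on the victim's origin.
const express = require('express');
const app = express();

app.get('/redirect', (req, res) => {
  // No validation of req.query.url: this is the open redirect.
  res.redirect(req.query.url);
});

app.listen(8080);

With that in place, an attack URL only ever shows history.pushState a same-origin path (/redirect?url=...), while the XMLHttpRequest follows the redirect to the attacker's page (which can make its response readable by serving a permissive CORS header), and that page's markup lands in innerHTML.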

So… you want to see a demo, I'm sure. Here we go:
http://jquery-mobile-xss.appspot.com/#/redirect?url=http://sirdarckcat.github.io/xss/img-src.html
The code is here (super simple).

The disclosure


Fairly simple bug, super easy to find! I wouldn't be surprised if other people had found this already. But I contacted the jQuery Mobile team, told them about it, and explained the risk.

The jQuery Mobile team explained that they consider the Open Redirect to be the vulnerability, not their behavior of fetching and inlining, and that they don't want to make a change because it might break existing applications. This means that, as far as I have been informed, there won't be a patch. The jQuery Mobile team suggests returning a 403 for all XHR requests that might result in a redirect.
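For what it's worth, here is one way a site could follow that suggestion, again sketched as an Express handler. Detecting XHR via the X-Requested-With header is my assumption; jQuery sets it for same-origin AJAX requests, but your framework may offer a better signal.

// Sketch of the suggested mitigation: never serve a redirect to an XHR.
const express = require('express');
const app = express();

app.get('/redirect', (req, res) => {
  // An XHR that gets redirected is exactly the primitive the attack needs,
  // so reject those requests outright.
  if (req.get('X-Requested-With') === 'XMLHttpRequest') {
    return res.status(403).send('Redirects are not served to XHR requests');
  }
  res.redirect(req.query.url);
});

app.listen(8080);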

This means that every website that uses jQuery Mobile and has any open redirect anywhere is vulnerable to XSS.

Also, as a bonus, even if you use a CSP policy with nonces, the bug is still exploitable today by stealing the nonce first. The only type of CSP policy that is safe is one that uses hashes or whitelists alone.
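To make that distinction concrete, here are two illustrative policy strings; the nonce, hash, and host are placeholders I made up, not a recommendation.

// Nonce-based policy: still bypassable on a page running jQuery Mobile,
// because the injected markup can be used to steal the nonce and reuse it.
const noncePolicy = "script-src 'nonce-d0e9a1f3'";

// Hash/whitelist-based policy: there is no per-response secret to steal,
// so the innerHTML injection cannot be escalated this way.
const hashPolicy =
  "script-src 'sha256-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQ=' https://static.example.com";

// Either value would be sent as the Content-Security-Policy response header,
// e.g. res.setHeader('Content-Security-Policy', hashPolicy);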

The victim

jQuery Mobile is actually pretty popular! Here's a graph of Stack Overflow questions over time.

And here's a graph of jQuery Mobile usage statistics over time as well:

You can recreate these graphs here and here. So, we can say we are likely to see this on around 1 or 2 of the websites we visit every week. Pretty neat, IMHO.

I don't know how common open redirects are, but I know that most websites have them. Google doesn't consider them vulnerabilities (disclaimer: I work at Google - but this is a personal blog post), but OWASP does (disclaimer: I also considered them vulnerabilities in 2013). So, in a way, I don't think jQuery Mobile is completely wrong in ignoring this.

Now, I wanted to quantify how common open redirects are anyway, so I decided to go to Alexa and find an open redirect on some of the top websites. Note that "open redirect" in this context includes "signed" redirects, since those can be used for XSS just as well.

Here's a list from Alexa:
  1. Google
  2. YouTube
  3. Facebook
  4. Baidu
  5. Yahoo
I also thought it would be interesting to find an open redirect on jQuery's own website, to see whether a random site, and not just the top ones, might have one. While I did find that they use Trac, which has an Open Redirect, I wasn't able to test it because I don't have access to their bug tracker =(.

Conclusion

One opportunity for further research, if you have time on your hands, is to try to find a way to make this bug work without the need for an Open Redirect. I tried to make it work, but it didn't pan out.

In my experience, Open Redirects are very common, and they are also a common source of bugs (some of them cool). Perhaps we should start fixing Open Redirects. Or perhaps we should be more consistent about not treating them as vulnerabilities. Either way, for as long as we have this disagreement in our industry, we at least get to enjoy some XSS bugs.

If you feel motivated to do something about this, the jQuery team suggested sending a pull request to their documentation to warn developers of this behavior, so I encourage you to do that! Or just help me spread the word about this bug.

Thanks for reading, and I hope you liked this! If you have any comments please comment below or on Twitter. =)