You'll find that clinical trials are regularly halted when the treatment is found to be markedly superior or inferior to placebo or the control group.

The original study that showed aspirin's effect on heart attack prevention comes to mind for the former circumstance, and a number of HIV prevention studies for the latter.
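
For a sense of the mechanics: interim analyses typically boil down to a pre-registered statistical boundary that a data safety monitoring board checks at scheduled looks. Here's a minimal sketch, assuming a two-arm trial with a binary outcome (e.g. heart attack / no heart attack) and a single illustrative z-boundary of 3 - real trials pre-register group-sequential boundaries such as O'Brien-Fleming that vary across looks:

  import math

  def interim_check(events_treat, n_treat, events_ctrl, n_ctrl, z_boundary=3.0):
      # Two-proportion z-test at one interim look. The flat boundary of
      # 3.0 is illustrative; real protocols pre-register boundaries that
      # change across looks to control the overall false-positive rate.
      p_t = events_treat / n_treat
      p_c = events_ctrl / n_ctrl
      p_pool = (events_treat + events_ctrl) / (n_treat + n_ctrl)
      se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_ctrl))
      z = (p_t - p_c) / se
      if z <= -z_boundary:
          return "halt: treatment markedly superior"  # far fewer bad events
      if z >= z_boundary:
          return "halt: treatment markedly inferior"
      return "continue enrolling"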



Halting a clinical trial when the result is clear is very different from skipping the clinical trial on the basis of a correlation study or two. Your comparison is so nonsensical that either you are being deliberately disingenuous or you don't understand statistics at all.

Either way, no point in continuing this.


I was being charitable to your example, because generally speaking there are no observational studies done before a drug is put up for approval - that's not how the approval pipeline works. The closest thing to your example I can come up with is the occasional off-label use of a drug for some other condition, but the reports from those are largely small-n studies. That's entirely different from a series of studies based on NHANES.

Beyond that, in an intentional randomized trial - as opposed to a 'happy accident' like the Oregon study - the actual control is not 'nothing' but the medically indicated standard of care. Studies are often required to provide medical care, education, etc. to their participants. I cannot imagine a study amounting to "We deny a bunch of folks health insurance" getting past an IRB unless it was an externally forced process, like the Oregon study.

Your insult about not understanding statistics, in addition to being off-base, is rather spurious. This isn't a statistical question; it's a public health ethics question. Statistics doesn't really bear on whether "Keep a bunch of people from accessing healthcare" will get nailed by an approval board.

Also, the FDA often does take observational evidence into account, especially when expanding things like what age range a drug is medically indicated for.


While we're on the topic, Bayesianwitch.com's description:

"Your new homepage goes viral, but you aren't sure what copy is converting. Hook that copy up to BayesianWitch and only converting copy will be showing. No waiting days for the answers from an A/B test."

Seems to imply a reliance on "correlational" data. Is there some hidden randomization in there? Or are you giving your clients a lower standard of evidence?


We are using a lower standard of evidence than medical decisions require. The goal is to get as many clicks/conversions as possible in aggregate, most of the time. I.e., if you have a call to action ("3 day sale", "spring sale", "march sale") that dies in 3 days, we'll do the best we can to increase your conversions in those 3 days.

Similarly, if you have 10,000 SEO-optimized microsites, each with traffic too low for a per-site A/B test, we'll improve your conversion rate across the 10,000 microsites.
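
For illustration, a Bayesian bandit along the lines of Thompson sampling is the standard way to get this behavior - a toy sketch of the idea (not production code), with each variant's counts pooled across sites to cover the microsite case:

  import random

  # One Beta(1 + conversions, 1 + non-conversions) posterior per copy
  # variant; counts can be pooled across many low-traffic microsites.
  stats = {"3 day sale": [0, 0], "spring sale": [0, 0], "march sale": [0, 0]}

  def choose_variant():
      # Thompson sampling: draw once from each posterior and show the
      # variant with the highest draw. Traffic shifts toward winners as
      # evidence accumulates instead of waiting out a fixed horizon.
      draws = {v: random.betavariate(1 + conv, 1 + shown - conv)
               for v, (conv, shown) in stats.items()}
      return max(draws, key=draws.get)

  def record(variant, converted):
      stats[variant][1] += 1               # impressions
      stats[variant][0] += int(converted)  # conversions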

If you want to make a long-term change (e.g., logo, button color, feature) for a high-traffic site (your one and only landing page), you are better off using a traditional A/B test.
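
The traffic threshold isn't hand-waving: a fixed-horizon A/B test needs its sample size set up front. A rough normal-approximation sketch, with illustrative numbers:

  def n_per_arm(p_base, rel_lift, z_alpha=1.96, z_power=0.84):
      # Approximate visitors per arm to detect a relative lift in
      # conversion rate from p_base, at ~5% significance / ~80% power.
      p_new = p_base * (1 + rel_lift)
      variance = p_base * (1 - p_base) + p_new * (1 - p_new)
      return variance * (z_alpha + z_power) ** 2 / (p_base - p_new) ** 2

  # A 2% baseline with a hoped-for 10% relative lift needs roughly
  # 80,000 visitors per arm - hopeless for one microsite, routine for
  # a single high-traffic landing page.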



