Hacker News

> Based on your comment, the effect could be larger as well as smaller.

This is true of any underpowered study: the real effect could always be “larger as well as smaller”. The statement doesn’t add anything to the conversation.

The mistake is pivoting around poorly structured and underpowered research.

> All research is met on HN by people who know better and will tell you why it's flawed.

This is a misunderstanding. People who know how to read studies will always be aware of the limitations.

There’s a difference between saying “everything is flawed” and pointing out the limitations. Most early research comes with significant limitations like small sample sizes or large confounders. You have to understand these in conjunction with the results to know how to interpret them.

There’s a cynical approach where people see discussion of limitations, don’t understand it, and instead go into a mode where they think it’s smarter to ignore all criticisms equally because every paper attracts criticisms.

This is just lazy cynicism, though. There are different degrees of criticism, and you have to be able to see the difference between something like a slightly underpowered study and something like this paper, where the authors threw a lot of regressions at a lot of numbers and kind of sort of claimed to have found a trend.



In this case, it only takes a few seconds to find multiple studies confirming the effect.

For example

https://onlinelibrary.wiley.com/doi/10.1111/ina.12042

https://onlinelibrary.wiley.com/doi/10.1111/j.1600-0668.2010...




99% of the time it's smaller. Saying there is an equal chance of it being smaller or bigger is also false: it could be bigger, but the probability strongly favors that it won't be.
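The asymmetry being claimed here (sometimes called "Type M" or exaggeration error: results that reach significance in an underpowered study tend to overestimate the true effect) can be checked with a quick Monte Carlo sketch. The true effect size, sample size, and test below are illustrative assumptions, not numbers from the paper under discussion:

```python
import random
import statistics

# Monte Carlo sketch of effect-size exaggeration in underpowered studies.
# Assumed setup: small true effect, small groups, rough z-test at alpha = 0.05.
random.seed(42)
TRUE_EFFECT = 0.2   # true mean difference in SD units (assumption)
N = 20              # per-group sample size -> low statistical power
SIMS = 5000

def one_study():
    """Simulate one two-group study; return (observed diff, significant?)."""
    treat = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    ctrl = [random.gauss(0, 1) for _ in range(N)]
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    se = (statistics.pvariance(treat) / N + statistics.pvariance(ctrl) / N) ** 0.5
    return diff, abs(diff) > 1.96 * se

# Keep only the studies that "found" a significant effect.
significant = [d for d, sig in (one_study() for _ in range(SIMS)) if sig]

# Fraction of significant results whose observed effect exceeds the truth.
exaggerated = sum(d > TRUE_EFFECT for d in significant) / len(significant)
print(f"{len(significant)} significant runs; "
      f"{exaggerated:.0%} overestimate the true effect")
```

With these assumed numbers only a small fraction of runs reach significance at all, and among those that do, the observed effect almost always exceeds the true one, which is the sense in which the published estimate is far more likely to be an overestimate than an underestimate.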



