
There's a practical question of whether this holds for realistically encountered problems, given a reasonable threshold on both the size and utility of a "significant task" and a realistic number of cores.

Without a requirement of utility it's easy to come up with counterexamples from math, e.g. "does the Collatz sequence starting from Graham's number reach one?" Once you've exhausted the cores that can help with the actual arithmetic, you are still gated by the decision made at each step, so cores cannot work "ahead" of each other. There may well be much smarter things we can do than brute force, but that's not "just coding work" at that point.
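
To make the dependency concrete, here's a minimal Python sketch (Graham's number itself is far too large to ever materialize; read n as a stand-in for any huge integer). Each iteration consumes the output of the previous one, so no core can begin step k+1 before step k finishes:

    def collatz_steps(n):
        # The sequence is a chain: step k+1 needs the value produced
        # by step k. Parallelism can only help *inside* a single step
        # (e.g. the big-integer multiply in 3n+1), never across steps.
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 steps, each strictly after the last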

Theoretically, it doesn't hold - at some point you have split apart everything that can be split, and you are left with an essential chain of data dependencies that cannot be parallelized any further.
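
In work-span terms (the standard formalization from the parallel-algorithms literature, not something the comment above spells out): if T_1 is the total work and T_inf is the length of the longest dependency chain (the span), then no schedule on P cores can beat

    \text{speedup}(P) \le \min\!\left(P,\ \frac{T_1}{T_\infty}\right)

Once P exceeds T_1 / T_inf, extra cores sit idle. For the Collatz example above, T_inf is essentially the number of steps, so that ceiling arrives almost immediately.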


