Hacker News

Most of them are. But my problem with this claim is that good papers can be classified under this formula as well. In a field like image recognition, almost everything after ResNet is an incremental improvement on it. Most of that work is BS, but there are still some interesting new ideas, even if their gains are only marginal.


I think the difference is that many of the new algorithms aren't even showing reproducible incremental improvements. Rather, they just benefit from the massive amounts of data available.

Though, I fully grant that part of the problem is that they are optimizing for different things. Sometimes the goal was just faster training, not better accuracy. (Though I'm curious how fungible those are... something that trains faster could ostensibly be trained to better accuracy in the same amount of time, no?)
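The fungibility intuition above can be sketched with a toy calculation (all numbers and the accuracy curve are hypothetical, just for illustration): under a fixed wall-clock budget, a method that is faster per epoch fits in more epochs, so its speed advantage can be converted into extra accuracy.

```python
# Toy sketch: fixed time budget, two hypothetical methods that differ
# only in seconds-per-epoch. The faster one fits in more epochs.

def epochs_within_budget(budget_s: float, secs_per_epoch: float) -> int:
    """How many full epochs fit in the wall-clock budget."""
    return int(budget_s // secs_per_epoch)

def toy_accuracy(epochs: int) -> float:
    """A made-up saturating accuracy curve (diminishing returns per epoch)."""
    return 1.0 - 0.5 * (0.9 ** epochs)

budget = 3600.0  # one hour of training, say

baseline = epochs_within_budget(budget, secs_per_epoch=120.0)  # 30 epochs
faster = epochs_within_budget(budget, secs_per_epoch=60.0)     # 60 epochs

# Same budget, more epochs -> higher accuracy on this toy curve.
print(baseline, faster)
print(toy_accuracy(baseline), toy_accuracy(faster))
```

Of course, real training speedups (bigger batches, lower precision, different optimizers) sometimes change the accuracy curve itself rather than just sliding along it, which is exactly why speed and accuracy aren't perfectly interchangeable.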



