
ConvNets have gotten popular because of their strong empirical results. All the recent work on visualizing CNNs suggests that the Deep Learning community still has a lot to learn about its own algorithms.

But high-level notions like "a jaguar is a cat-like animal" aren't necessary to perform well on an N-way classification task like ImageNet.

More importantly, everybody knows there's plenty wrong with a purely appearance-based approach like CNNs. Every few years a new approach pops up that is based on ontologies, inspired by Plato, etc., but these systems require a lot of time and effort. Worse, they don't perform as well on large-scale benchmarks. In the publish-or-perish world, you can either jump on the CNN bandwagon or start reading Aristotle's metaphysics and never earn your PhD.



I doubt that reviewers for NIPS would reject a paper with a novel approach just because it didn't perform at a best-in-class level, provided it offered a way forward.

If it doesn't work at all, or isn't a new idea, that's different.



