FWIW I thought it read like reviewer 2, which makes me actually think they have a relationship with academia.
The problem I have with their comment is that it rejects the critique by pointing to a different issue, as if there were a single issue with academia that leads to the mess. So it comes off as a typical "reviewer 2" comment to me, where there is more complaint and disagreement than critique.
FWIW, I think we in academia need to embrace the fuzziness and noise of evaluation. I think the issue is that by trying to make sense of an extremely noisy process, we placed too much value on the (still noisy) signals we can use. It is a problem to treat these signals as objective answers and to deny the existence of Goodhart's Law (this is especially ironic in ML, where any intro class discusses reward hacking). And in this, I think there's a strong coupling between cgshep's and chipdart's complaints.
As for publishing, I think we also lost sight of the main reason we publish: to communicate with our peers. Publishers played an important role since, not even a century ago, we could not make our works trivially available to our peers. But now the problem is information overload, not lack of information. And I think the review process is getting worse each year in part because we place so little value on the act of reviewing, do not hold anyone accountable for a low-quality review[0,1], do not hold ACs or meta-reviewers accountable either, and have so many papers to review that I don't think we could expect high-quality reviewing even if we actually incentivized it. I mean, in ML we expect a few hundred ACs to oversee over ten thousand submissions?
My question is whether we'll learn that the noisiness of the process and the randomness of rejection create a negative feedback loop of papers, where you "roll the dice" at the next conference: you resubmit without changes, alongside your new work (publish or perish, baby). If we had quality reviewing, at least that would put pressure on papers to get incrementally better instead of just being recycled. But recycling is a very efficient strategy right now, and we've seen plenty of data suggesting it works.
[0] I understand the reasons for this. It is a tricky problem.
[1] I'd actually argue we incentivize bad reviews. No one questions you when you reject a work, and finding reasons to reject a work is far easier than finding reasons to accept one. There are always legitimate reasons to reject any work. Not to mention that the whole process is zero-sum, since venue prestige is for some reason based on the percentage of papers rejected. As if there weren't variance in year-to-year quality.