How can someone be intelligent enough to submit a research paper but stupid enough to fail at taking the most basic steps to hide their AI plagiarism?
What do we even do about something like this? Is there an organization that has the power to go through this list and blacklist each of the authors from future publications? Our society should have a zero tolerance policy for drivel like this. It cheapens the entire institution.
I wouldn't call it plagiarism, but not even reading the output of the LLM before pasting it into your paper shows a lack of professionalism and indicates that the data in that table might as well be the hallucination of an LLM. Not exactly a good look.
I disagree. LLMs fundamentally change the situation: They make it easier to "write" low quality publications and they make it harder to immediately recognize that a publication is low quality.
Before LLMs, one of the main deterrents of low quality work was that the effort required to write a low quality paper was within an order of magnitude of the effort required for a high quality paper. Using LLMs, however, I can "write" hundreds of low quality papers (that look okay-ish on the surface) in the time it takes me to write a single high quality paper.
I also don't think that researchers publishing high quality papers will benefit that much from LLMs. I tried getting ChatGPT to reproduce an argument I made in my PhD thesis and it took me many tries before ChatGPT even produced something without factual errors. As for padding: Papers should ideally not contain any padding. If a section in your paper is so devoid of actual information that an LLM can write it, you should probably remove it altogether.
> Using LLMs, however, I can "write" hundreds of low quality papers (that look okay-ish on the surface) in the time it takes me to write a single high quality paper.
I've been thinking lately that one could probably leverage the same technology to evaluate the quality of research, possibly as a post-hoc verification.
> I also don't think that researchers publishing high quality papers will benefit that much from LLMs.
Oh, but they do! If nothing else, LaTeX formatting is tedious.
> Papers should ideally not contain any padding. If a section in your paper is so devoid of actual information that an LLM can write it, you should probably remove it altogether.
That's mostly correct, except perhaps for the abstract and the parts that provide structure for the paper (outline, maybe conclusion, ...). However, you can always seed an LLM with the bits of information you want explained and use the result as a basis for improvement. It can seed your writing: there is no obligation to copy-paste the output and leave it as is.
> I've been thinking lately that one could probably leverage the same technology to evaluate the quality of research, possibly as a post-hoc verification.
I doubt it. They'll be just as effective as ChatGPT-detectors are. Completely useless.
> If not for anything, LaTeX formatting is tedious.
LaTeX formatting is a very, very, very small part of publishing research. At least it was for me.
> However, you can always seed an LLM with the bits of information you want explained and use the result as a basis for improvement.
That's what I tried to do with the argument in my PhD thesis. It didn't work. Also, at the point where you have meticulously thought through how to structure your argument (and that's what I needed to do to get ChatGPT to produce anything of value at all) you did 95% of the work already. Actually producing the text isn't the hard part of writing.
My point still stands: LLMs help far more when you want to produce low-quality papers than when you want to produce high-quality ones, and as a consequence the share of low-quality work will increase.
Hell, better that they DO include the prompt, since that (a) clearly discloses AI usage, and (b) lets anyone attempting to duplicate the experiment (if relevant) take their data (ideally with the prompt input, or at least a subset showing its full structure, in an appendix) and run it through the same prompt.