r/MachineLearning · 1 min read

arXiv implements 1-year ban for papers containing incontrovertible evidence of unchecked LLM-generated errors, such as hallucinated references or results.

Mirrored from r/MachineLearning for archival readability. Support the source by reading on the original site.

From Thomas G. Dietterich (arXiv moderator for cs.LG) on 𝕏 (thread):
https://x.com/tdietterich/status/2055000956144935055
https://xcancel.com/tdietterich/status/2055000956144935055

"Attention arXiv authors: Our Code of Conduct states that by signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated.

If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s).

We have recently clarified our penalties for this. If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can't trust anything in the paper.

The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue.

Examples of incontrovertible evidence: hallucinated references, meta-comments from the LLM ("here is a 200 word summary; would you like me to make any changes?"; "the data in this table is illustrative, fill it in with the real numbers from your experiments")."

submitted by /u/Nunki08

