AI Conference Balks at Nearly 500 Papers for Improper AI Use

ICML used hidden prompts to expose AI-written peer reviews
Posted Mar 26, 2026 6:32 AM CDT

A top AI conference just turned the tables on artificial intelligence by using it to catch reviewers secretly relying on AI. Per Nature, the International Conference on Machine Learning, set for July in the South Korean capital of Seoul, turned away 497 papers—about 2% of all entries—after determining that their authors had broken its no-AI rule while reviewing other researchers' work. ICML requires at least one author on each submitted paper to also serve as a reviewer for fellow scientists.

In this case, organizers embedded hidden instructions in manuscripts that only large language models would see. Those prompts nudged the AI to slip specific phrases into the resulting review; when those phrases appeared, organizers could confirm AI involvement. In all, 506 reviewers were found to have used LLMs, producing 795 suspect reviews. Organizers framed the move as a stand in the name of trust, not a verdict on review quality or on the intentions of the reviewers in question.

The crackdown has split the research community, with some praising the hard line and others warning it will simply push AI use further underground, all as conferences and journals scramble to write clearer rules for machine help in peer review. A recent survey shows that more than half of researchers have tapped into AI during the peer-review process, per Nature. One Stanford computer scientist is looking further into this topic, and his findings so far indicate that "AI excels at spotting gaps, but judgment calls still need humans."
