Research Fraud at MIT: High-Profile Study Was Too Good to Be True

An internal review at MIT has concluded that a paper by a PhD student should be withdrawn from both arXiv and The Quarterly Journal of Economics, to which it had been submitted for publication.

This is no ordinary first-year PhD paper. The article attracted coverage from The Wall Street Journal, The Atlantic, and Nature. The study analysed the use of AI by more than 1,000 scientists at a large US company, making it a substantial and potentially influential piece of work.

Last year, some observers, including Stuart Buck, questioned the article's credibility. It seemed an implausibly impressive piece of work for a single junior researcher.

They were right to be suspicious. The results were entirely fabricated.

In a detailed blog post, Ben Shindel highlights the signs of scientific fraud: suspiciously neat data, perfect results, and conclusions too good to be true. It’s reminiscent of the Jan Hendrik Schön case — a researcher at a prestigious institution producing perfect data that ultimately didn’t exist.

Shindel also notes that the fraud might have been identified sooner if arXiv allowed comments. It's often difficult for researchers to raise concerns directly with peers, institutions, or journals.

This is where Signals comes in. By placing Expert Contributions on the preprint, researchers can flag specific concerns and alert the wider research community.

Expert Contributions flagging issues in the arXiv preprint

These contributions can be added at any time. In this case, the Signals team identified the issue over the weekend and updated the network, flagging the problem in real time.

The article has been online since December 2024 and has already been cited several times. The citing authors, potential readers, and publishers need to know that those works are built on a fabricated foundation. A visible badge overlaid on the preprint, flagging credibility concerns, would provide an early warning and help prevent the spread of false findings.

Signals flags that the reference list includes a problematic article

At Signals, our network-based evaluations flag citations to retracted or problematic work. For example, this publication relies in part on the fraudulent preprint to claim that “top performers may also benefit disproportionately from GenAI”. With the preprint now withdrawn, the authors and readers of that publication should be aware that this assertion is weakened.

More broadly, this approach helps researchers and editors assess the credibility of what they’re reading, and is essential to stopping the spread of fraudulent findings through the scholarly record.

Learn more about Signals and how we can work together to restore trust in research. You can: