The future of research integrity: insights from the audience at Researcher to Reader

Last week, Nicko and I attended the Researcher to Reader conference in London. The event brings together researchers, publishers, and institutions, creating opportunities for interesting conversations and learning. We’re already looking forward to next year’s event. 

During the conference, we gave a talk on ‘The Future of Research Integrity’. We highlighted historic cases of research misconduct, discussed the current state of research integrity, and shared the perspectives of research integrity sleuths, experts, and the Signals team on the future of research integrity. We also asked the audience for their views. 

Our first question to the audience was:

The audience was overwhelmingly pessimistic about the near future of research integrity. Nearly 80% believed the issues would get worse, while 17% thought they would improve. In the following questions, we explored why. 

Throughout the talk, we discussed the challenges posed by the culture of “publish or perish”. Several research integrity sleuths and experts told us that they believe this is one of the root causes of research misconduct, and we agree. If we can change this culture, some of the integrity issues facing the industry might be resolved.

We asked the audience:

The response was skeptical. A large majority, 86%, believed that current incentive structures would remain unchanged, and that publish or perish would persist.

The audience expects research integrity issues to worsen, and they don’t believe the underlying incentives will shift. So we wanted to know what, specifically, concerned them. We asked them to sum it up in a few words:

They provided a range of answers, which we grouped into several key categories:

AI was the main concern, highlighted by 36 members of the audience. This included AI-generated text, images, and data, all of which could be used to fabricate articles. One audience member also raised a concern about AI agents, which could be used to automate the tasks of paper mills. I asked the audience to raise their hands if they felt that AI could have a positive impact on research integrity, and only 5 people did so.

Other major worries were stakeholders failing to take responsibility and act, and the cost and resources required to address the problem. While publishers are starting to take action, there are concerns that institutions and funders will not get involved or provide the necessary resources. At Signals, we’re beginning to work with institutions and have seen a genuine desire to engage with and improve research integrity.

Politics (particularly in the US) and increasing distrust in science also concerned the audience. Some fear that this distrust in research and political uncertainty will threaten funding for research and efforts to improve research integrity. Just last week, staff reductions were announced at the National Science Foundation’s Office of Inspector General and at the Office of Research Integrity.

Other concerns included the tension between fast publication and upholding research integrity, research security, and fake medical guidelines impacting patients. 

The audience’s insights suggest we may be facing a convergence of issues that erode trust in research: an increase in publication fraud powered by technological advances, polarized politics leading to the defunding of research, and under-resourced institutions failing to act on research integrity.

The scholarly record is one of our most valuable resources, and as an industry, we must take collective action to safeguard its integrity. That’s why, at Signals, we are committed to working with the entire scholarly community. While we share the concerns of the Researcher to Reader attendees, we believe that together we will address these challenges and restore trust in research.

Learn more about Signals and how we can work together to restore trust in research. You can: