Integrating AI image checks into the research workflow can help preserve integrity. Dr Dror Kolodkin-Gal, founder of the image integrity software Proofig AI, explores how a proactive approach prior to publication can help protect the reputation of academic institutions.
Preserving research integrity in manuscripts is an urgent issue for universities. One of the latest attempts to address it is a 'bug bounty' project, which pays researchers to spot errors in the scientific literature [1]. However, by that point the mistake is already published and can be costly. Another way of preserving integrity in science is to integrate checks into a researcher's workflow, such as with AI image tools.
Protecting the integrity of the scientific literature is of the utmost importance. While much attention has been paid to plagiarism detection tools, tools that identify image falsification and duplication have traditionally been found wanting. Yet, as with plagiarism, image errors, in many cases unwitting, can lead to the withdrawal of manuscripts, which damages not only the author's reputation but also that of the institution itself.
In the UK, the leading universities have taken many years to build their reputations. Three of the country's institutions rank among the top 20 academic institutions in the world: the University of Oxford, the University of Cambridge and University College London [2]. The respect such institutions command for their work is noteworthy, but, equally, any unwitting errors in high-profile research have the potential to cause reputational damage.
AI tools can help spot errors in images or manuscripts before they are submitted to journals, mitigating the risk of erroneous figures being unwittingly published. These tools work by flagging suspect images to the researcher for further investigation. For example, Proofig AI can be built into a researcher's workflow to ensure that images are checked well ahead of submission. Such tools can even detect whether an image is AI generated, spotting falsified images in the work of others as well as identifying a researcher's own honest mistakes.
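To illustrate the kind of check such a workflow might automate, the sketch below flags near-duplicate figure files in a manuscript folder using perceptual hashing. This is a simplified illustration only, not a description of how Proofig AI or any other commercial tool works internally; the folder path, similarity threshold and the use of the open-source Pillow and ImageHash libraries are assumptions made for the example.

```python
# Minimal sketch: flag near-duplicate figures in a manuscript folder
# before submission. Illustrative only -- not any specific vendor's method.
from pathlib import Path
from itertools import combinations

from PIL import Image        # pip install Pillow
import imagehash             # pip install ImageHash

FIGURE_DIR = Path("manuscript/figures")   # assumed folder layout
HAMMING_THRESHOLD = 5                      # assumed similarity cut-off

def hash_figures(folder: Path) -> dict:
    """Compute a perceptual hash for every image file in the folder."""
    hashes = {}
    for path in sorted(folder.glob("*")):
        if path.suffix.lower() in {".png", ".jpg", ".jpeg", ".tif", ".tiff"}:
            with Image.open(path) as img:
                hashes[path] = imagehash.phash(img)
    return hashes

def flag_near_duplicates(hashes: dict) -> list:
    """Return pairs of figures whose perceptual hashes are suspiciously close."""
    flagged = []
    for (p1, h1), (p2, h2) in combinations(hashes.items(), 2):
        distance = h1 - h2   # Hamming distance between the two hashes
        if distance <= HAMMING_THRESHOLD:
            flagged.append((p1, p2, distance))
    return flagged

if __name__ == "__main__":
    for p1, p2, dist in flag_near_duplicates(hash_figures(FIGURE_DIR)):
        print(f"Review before submission: {p1.name} vs {p2.name} "
              f"(hash distance {dist})")
```

Running a check like this as a routine pre-submission step is the general idea: flagged pairs are not verdicts, only prompts for the researcher to look again before a journal or a post-publication sleuth does.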
While projects that try to preserve integrity are welcome, spotting errors in published research through 'bug bounties' comes too late in the process. Building error-spotting mechanisms into the research workflow is a straightforward step that universities can take to protect their hard-earned reputations.
Challenge of AI
In its 2023 annual statement, the UK Committee on Research Integrity acknowledged that AI use amongst researchers has increased since the introduction of generative AI models [3]. One study published by University College London suggests that at least one per cent of all papers published in 2023 were written at least partially by AI [4].
The rapid pace of AI development requires researchers to maintain awareness of the technology both as a research tool and as a research area in its own right. While acknowledging the challenges of keeping pace with AI, including detecting AI-produced text and imagery, the statement noted that such tools can enhance research processes and indicated that there is greater scope for the use of AI in assessing research integrity.
References
1. https://www.nature.com/articles/d41586-024-01465-y
2. https://www.mastersportal.com/rankings/2/academic-ranking-of-world-universities-shanghai-jiao-tong-university.html
3. https://ukcori.org/our-work/annual-statement-2023/
4. https://arxiv.org/abs/2403.16887