Joint Research Management Office

Interim Statement on Artificial Intelligence (AI) and Research Integrity at Queen Mary

Where does Queen Mary stand on AI and Research Integrity?

Artificial Intelligence (AI) is itself, in part, a product of research and is now a permanent feature of academic life.

When it comes to producing trustworthy, rigorous and transparent research, AI presents both opportunities and challenges. The University recognises that researchers will increasingly use AI, but the research sector is still developing appropriate policies and guidelines. We therefore encourage researchers to exercise due care and critical thinking in their use of AI, particularly generative AI, during this transitional period.

What is generative AI?

Generative AI refers to AI tools that use deep learning models to produce new content, such as text, images or code, in response to prompts. Examples include ChatGPT and other chatbots and virtual assistants.

How can AI benefit Research Integrity?

It is widely recognised that AI can benefit research and the writing process. For example, AI tools might enable the verification of data and other quality checks on an unprecedented scale. There is also a potential equality, diversity and inclusion (EDI) benefit: researchers who are not native English speakers might find that AI writing tools improve the clarity of their written English.

What are the potential pitfalls of using AI in research?                     

While AI tools can improve writing quality, researchers should be mindful that AI cannot itself be an author: it is not a legal entity and cannot take responsibility for the work.

Researchers should be aware of the limits of AI tools. These tools generate output probabilistically, based on patterns in their training data, and cannot produce genuinely original work. Information derived from AI is also not necessarily reliable, as AI tools are often unable to distinguish between reliable and unreliable sources.

Cultural biases can also be embedded in AI models through their training data, and it is well documented that AI can fabricate information. It is therefore the responsibility of researchers to ensure the accuracy of the work they produce, even when assisted by AI.

Researchers should also consider the privacy implications of using AI tools: it is not always clear how data entered as prompts is subsequently stored and used.

Using AI responsibly in research

It is vital that researchers can distinguish between their own work and that of AI, taking responsibility for all the interpretations and conclusions produced. 

When using AI, transparency is key.  Any written work arising from a study should include a statement explaining how AI has been used and for what purpose.  The following broad approach to referencing the use of AI is advisable:

  • Name the AI platform used (for example, ChatGPT)
  • Maintain a record of prompts and responses, even if they are not subsequently used in written outputs
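
For illustration, such a statement might read: 'ChatGPT (OpenAI) was used to improve the grammar and readability of the final draft; all interpretations and conclusions are the authors' own.' This wording is an example, not a prescribed format.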

Funding applications

Researchers might also use AI in drafting grant applications. However, the University still expects them to familiarise themselves with the policies of the relevant funder. For example, UK Research and Innovation (UKRI) has its own policy on the use of generative AI in its grant application and review process, which places responsibility for the integrity of AI-generated outputs on the applicant.

The UK Research Integrity Office (UKRIO) has additional resources about AI in research on its website.


As this interim statement relates to an evolving situation, it will be reviewed at appropriate intervals.
