Guidance on responsible use of AI in research
- RRI and the use of AI in research
- Practical steps for responsible AI use in research
- Additional resources
This document provides general guidance about the application of Responsible Research and Innovation (RRI) principles to the use of Artificial Intelligence (AI) in research. The UK government defines AI as ‘the theory and development of computer systems able to perform tasks normally requiring human intelligence.’ AI technologies are applied across a range of analytical and translational activities.
The guidance is primarily intended to support researchers using AI tools as part of their research workflows. It is not intended to replace or duplicate existing governance for research where the development of AI technologies is itself the main research activity. However, the broad principles can be applied to research involving the use of AI, research that develops AI, and research into AI.
Use of AI does not diminish researcher responsibility. You are accountable for all outputs, decisions, and consequences arising from AI use in your research.
RRI and the use of AI in research
AI is now a permanent feature of academic life, with both benefits and challenges for researchers. Nevertheless, researchers remain fully accountable for the integrity, accuracy and originality of their work, regardless of the extent to which AI was used to create it. This means that AI cannot replace critical thinking, ethical judgment, or compliance with institutional and legal standards. Any AI-related misuse or breach, whether intentional or accidental, must be reported and handled in line with applicable policies and procedures.
The purpose of this guidance is to help researchers consider how the RRI principles might be applied to Anticipate risks, Reflect on implications, Engage with relevant stakeholders, and Act responsibly (the AREA framework) when employing AI in their research.
It is important to recognise AI’s potential benefits when used responsibly. AI technologies can enhance research processes and outputs in several ways:
- Accelerate research workflows by streamlining and deepening literature reviews and file-based research, generating alternative perspectives, suggesting methods of data analysis, and supporting hypothesis generation.
- Support AI-assisted scientific discovery and experimentation, e.g. Google Research’s Co-Scientist and Stanford’s Virtual Lab.
- Improve pattern recognition across complex datasets and disciplines, and identify references to enable the verification of data and other quality checks.
- Support academic writing in various styles, helping to edit content for specialist and non-specialist audiences and reducing language barriers, an important Equality, Diversity, and Inclusion (EDI) benefit.
- Help translate research into impact and reach wider audiences.
Despite potential benefits, AI also poses challenges to the reliability of the research record, which is fundamental to Research Integrity. Potential problems include the following:
2.1 Accuracy & Reliability
- AI may generate inaccurate, misleading, or fabricated content (e.g., hallucinated citations or fabricated results), undermining the integrity of the research record. You should therefore exercise caution and verify outputs and their information sources (see the citation-check sketch after this list).
- Do not overestimate AI accuracy: avoid applying an AI model beyond the task it was trained for without reassessing error risks.
- Overreliance on AI may reduce critical thinking and analytical engagement in research, weakening methodological rigour.
- Poor-quality datasets or papers may be produced using AI, affecting credibility and reputation.
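As an illustration of the verification point above, here is a minimal sketch of screening AI-suggested citations by checking whether each DOI resolves in the public Crossref REST API. The `requests` dependency and the example DOIs are assumptions for illustration; a successful lookup confirms only that the DOI exists, so you must still check that the cited work actually supports the claim.

```python
# Minimal sketch: screen AI-suggested citations by checking whether each DOI
# resolves in the public Crossref REST API. A failed lookup flags a possibly
# hallucinated citation; a successful one still needs human verification.
import requests  # assumed third-party dependency

def lookup_doi(doi: str) -> str | None:
    """Return the registered title for a DOI, or None if it does not resolve."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title") or ["(no title registered)"]
    return titles[0]

# Example DOIs for illustration only; replace with the citations to be checked.
for doi in ["10.1038/s41586-021-03819-2", "10.0000/fabricated-by-a-model"]:
    title = lookup_doi(doi)
    print(doi, "->", title if title else "NOT FOUND: verify manually")
```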
2.2 Transparency & Reproducibility
- The black-box nature of many AI systems makes it hard to trace their decision-making processes.
- Lack of version control and changing model behaviour may undermine reproducibility in research (see the provenance-logging sketch after this list).
- Failure to disclose the use of AI tools. You need to follow institutional, funder, and publisher disclosure requirements.
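One practical mitigation for the version-control point is to record the provenance of every AI call alongside its output. The sketch below assumes the OpenAI Python client purely as an illustrative provider; the logged record (model identifier, parameters, prompt, and timestamp) is the substance, and the same pattern applies to any tool.

```python
# Minimal sketch: log the provenance of each AI call so results can be traced
# and re-run later. The OpenAI client is an illustrative assumption; adapt the
# call to whichever tool you actually use.
import json
from datetime import datetime, timezone

from openai import OpenAI  # assumed provider SDK, for illustration only

client = OpenAI()

def logged_completion(prompt: str, model: str = "gpt-4o",
                      temperature: float = 0.0) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": resp.model,  # resolved model version, not just the alias
        "temperature": temperature,
        "prompt": prompt,
        "output": resp.choices[0].message.content,
    }
    with open("ai_provenance.jsonl", "a") as f:  # append-only audit log
        f.write(json.dumps(record) + "\n")
    return record["output"]
```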
2.3 Accountability & Governance
- Unclear authorship and methodology may compromise scientific integrity.
- Legal frameworks related to AI advancements are still developing, creating regulatory gaps.
- Use of AI in sensitive areas (e.g., courts, consultancy, in scope of export controls) may breach terms or lead to reputational/legal consequences.
2.4 Intellectual Property
- Inputting third‑party or protected content into AI tools may breach IP policies and licensing conditions, e.g. Queen Mary Intellectual Property (IP) policy.
- AI providers may claim ownership over both the inputs and resulting AI outputs. This could create serious issues when researchers wish to publish or patent their work.
- Breaches of confidentiality or reuse licences may result in legal, contractual, or reputational consequences.
2.5 Data Privacy & Security
AI tools, particularly large language models, may retain and learn from user inputs. This may contravene institutional and legal requirements. Therefore:
- Adhere to Queen Mary Data Protection Policy and Queen Mary Information/Data Governance Policy.
- Do not input identifiable, personal, confidential, or IP-sensitive data into AI tools unless explicitly permitted by the data owner. Ensure that using AI tools does not infringe Queen Mary IP or any third-party IP or other rights.
- Use only institutionally approved AI software and tools for confidential, identifiable, or unpublished research and data.
- Review AI tools’ privacy policies and opt out of model-training where possible.
- Researchers are likely to be responsible for the safety and security of any new AI tools produced by their work.
- Consider any vulnerabilities in AI systems and whether they could be exploited by bad actors.
2.6 Use of AI tools in peer review
If using AI tools for peer review, be mindful that the research is not yet in the public domain. Therefore, consider the following:
- Ownership of materials input into AI tool.
- Possible breach of peer reviewer confidentiality agreement.
- Inadvertent release of research into public domain, thus limiting researchers’ ability to publish it.
- As a default, researchers should assume that using external AI tools for peer review is not permitted unless explicitly allowed by the journal, conference, or funder.
When research involves human participants or their data, additional obligations apply, particularly around informed consent, privacy, and secure storage:
- If using human data (identifiable or de-identified), consider whether participants were informed whether and how AI tools will process the data, whether any inputs/outputs will be retained by the tool or its provider, and any implications for model training or secondary use. This should be covered in the participant information sheet and consent form.
- Assess and mitigate the risks of data linkage and potential re‑identification introduced by AI tools.
- Evaluate whether AI tools could introduce or amplify bias, and whether this may impact participants, their rights, or the fairness of study procedures, analyses, or decisions.
- Consider appropriate storage and data protection requirements (see 2.5 Data Privacy & Security above).
When collaborating with external partners on research involving AI, it is important to ensure that partnerships are conducted ethically, in a fair and equitable manner, while adhering to legal and regulatory requirements. Consider the following:
- Ensure transparency about how AI tools are used within collaborations.
- Be aware of differences in institutional, national, or international policies and legal frameworks related to AI use.
- Comply with any relevant regulations, including export controls and the National Security and Investment Act.
Generative AI systems can require substantial energy and computing resources. Depending on data-centre design, location, and cooling technology, this may involve significant electricity use and water consumption. Training AI models can also produce hundreds of tonnes of CO2. Therefore, we should be mindful of the possible environmental implications and take the following steps:
- Before using AI, consider whether it is necessary and whether it is the most efficient tool for your task.
- Where possible, consider using AI tools and services that come from providers with strong sustainability commitments.
- Limit unnecessary runs, use smaller models when feasible, and batch tasks to reduce computational load (see the batching sketch after this list).
- Consider including sustainability considerations in your research planning and reporting.
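To make the batching point concrete, here is a minimal sketch of grouping many small tasks into a few model calls and routing routine work to a smaller model. `run_model` is a hypothetical stand-in for whichever client your provider supplies.

```python
# Minimal sketch: batch many small tasks into fewer model calls and route
# routine work to a smaller model. `run_model` is a hypothetical stand-in
# for a real provider client.
def run_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"[{model}] processed {len(prompt.splitlines())} item(s)"

def run_batched(items: list[str], batch_size: int = 20,
                model: str = "small-model") -> list[str]:
    """Send items in groups of `batch_size` instead of one call per item."""
    outputs = []
    for i in range(0, len(items), batch_size):
        batch_prompt = "\n".join(items[i:i + batch_size])  # one request per batch
        outputs.append(run_model(model, batch_prompt))
    return outputs

# 100 tasks become 5 calls instead of 100, cutting per-call overhead.
print(run_batched([f"summarise abstract {n}" for n in range(100)]))
```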
Carefully consider the social responsibilities and implications of using AI in research, including the broader social consequences, benefits, and challenges. AI systems may embed or amplify biases present in their training data. If these biases are reproduced in research outputs, they could lead to misleading or harmful conclusions when generalised. Consider the following:
- Whether AI use promotes fairness and accessibility without introducing new inequities.
- Use bias detection tools or review outputs critically to identify and mitigate potential bias in AI-generated content (see the swap-test sketch after this list).
- Document any steps taken to address bias or social impact.
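One simple heuristic for critical review is a counterfactual swap test: run the same prompt with only a demographic attribute changed and compare the outputs. The sketch below uses a hypothetical `query_model` stand-in; divergent outputs are a bias signal worth investigating, while identical outputs do not prove the absence of bias.

```python
# Minimal sketch of a counterfactual swap test: issue the same prompt with a
# demographic attribute swapped and compare outputs for divergence.
# `query_model` is a hypothetical stand-in for your actual AI client.
def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"response to: {prompt}"

TEMPLATE = "Assess the strengths of a grant proposal by a {group} researcher: ..."

responses = {group: query_model(TEMPLATE.format(group=group))
             for group in ["male", "female", "non-binary"]}

# Outputs that differ only because the group label changed are a bias signal
# and warrant closer review; identical outputs do not prove absence of bias.
for group, text in responses.items():
    print(group, "->", text)
```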
Researchers might also use AI for the purposes of grant applications or manuscript writing. However, the University expects them to familiarise themselves with the relevant policies of funders and publishers. For example, UK Research and Innovation (UKRI) has its own policy on the use of generative AI in its grant application and review process, which places responsibility for the integrity of AI-generated outputs on the applicant.
Practical steps for responsible AI use in research
Checklist
- Have you checked funder/publisher guidance?
- Are you using an institutionally approved tool?
- Are you inputting any personal, confidential, or IP-sensitive data?
- Have you verified AI-generated content?
- Should AI use be disclosed?
Additional resources
Queen Mary resources
- The Doctoral Training College provides dedicated AI guidance to doctoral students
- The Queen Mary Academy has published AI guidance for staff
- Queen Mary policies referenced in this guidance, including the Intellectual Property (IP) policy, the Data Protection Policy, and the Information/Data Governance Policy
- Queen Mary AI FAQ
- The AI software list sets out the tools approved for use at Queen Mary and the kinds of data you can share with them.
- Library Services at Queen Mary provides guidance for the appropriate referencing of AI.
External resources
- The UK Research Integrity Office (UKRIO) has published guidance to help researchers navigate the challenges of AI.
- The NIST AI Risk Management Framework (National Institute of Standards and Technology)