Authors: Julia Molinari (Graduate School); Daniel Gooch (Computing); Nicoleta Tipi (FBL)
Critical readers: Inma Alvarez (WELS)
Approved by: Clare Warren (Director of the Graduate School); Michel Wermelinger
Updated: Wednesday, 21 May 2025. Version 2.0.
This document can also be downloaded as a Word document: Position Statement and Guidance on Generative AI and Doctoral Education.
This Position Statement and Guidance (henceforth Guidance) on the use of Generative Artificial Intelligence (GenAI) in doctoral education (see 2 for ‘doctoral writing’) is aimed at doctoral supervisors, researchers, and examiners (see 4).
A ‘Position Statement and Guidance’ rather than ‘Policy’ captures the need for an open, agile, responsible, and proactive educational stance towards GenAI that remains:
a) responsive to fast-moving changes in technology, society, and politics;
b) ready to be re-evaluated in light of specific contexts, intentions, and uses.
The aim of this Guidance is to clarify the Graduate School’s principled, value-driven, and evidence-based approach to potential uses and abuses (see 5) of GenAI as they relate to the educational quality and integrity of the doctorate and doctoral writing (see 2).
Doctoral writing is an active, reflective, and reflexive process which enables researchers to think through and make connections between ideas and then decide which medium (e.g., language, image, sound) is best suited to communicate new and truthful knowledge. This makes doctoral writing a method of enquiry and not simply a text that can be outsourced to others, including machines. As a method, doctoral writing assumes a human author capable of intellectual oversight. Such oversight includes reader awareness, being accountable, and taking responsibility for the truthfulness, accuracy, and source of claims and for the ethical and environmental implications of the research, including impact, harms, and benefits arising from the use of GenAI.
Doctoral-level writing is distinguished from academic writing at other levels (e.g., Undergraduate or Master’s) because it is sustained over time and relies on ongoing feedback, close collaboration, trust, and communication with supervisors, and on nurturing research cultures conducive to developing researcher identities. Doctoral writing is, therefore, also a sustained process of socialisation whereby new identities are crafted through awareness of research communities, knowledge ecosystems, conventions, and expectations. It is this process that enables doctoral writers to make significant, rigorous, original, and ethical contributions to knowledge and/or practice. Deferring this process to GenAI potentially undermines such contributions.
Generative AI refers to a subset of artificial intelligence (AI) technologies which use statistical models to produce content (e.g., text, images, sounds) based on user input. In the case of text, such models are called Large Language Models (LLMs). Readily available stand-alone free and subscription-based GenAI tools currently include ChatGPT, Claude, and Gemini. The Open University has an enterprise licence for CoPilot which requires you to log in using your OUCU. Some students may have access to Grammarly, which also uses GenAI. Literature search tools such as Research Rabbit and Scite AI are also available, although some may consider these underdeveloped. All of these are commercially owned applications whose performance, training data, and outputs vary in accuracy, quality, and relevance.
Norms around best practices in using GenAI tools for doctoral research will take time to establish because of issues relating to confidentiality, copyright, data protection, and other elements of research integrity that vary across disciplines and individual projects. Currently, GenAI tool development is moving quickly, and with CoPilot being integrated into Microsoft Word, at least some use of GenAI in writing will soon be unavoidable.
In light of the complexity detailed below, this Guidance advances principles for good practice rather than fixed, punitive policies. It functions as a ‘living document’ that remains responsive to change whilst at the same time highlighting potential uses and abuses of GenAI technology (see 5). GenAI is increasingly becoming a useful tool that can assist researchers in the process of academic reading and writing, of mapping systematic literature reviews, and of generating original texts. GenAI apps are also available to everybody, both free of charge and via paid subscriptions. Such ubiquity, coupled with GenAI’s default integration into everyday technologies, makes it currently impossible to regulate via bans or detection software. Moreover, since GenAI performs convenient tasks, avoiding its use or demarcating clear boundaries between acceptable and unacceptable use is currently unrealistic.
Since the quality, integrity, training transparency, environmental impact, and accessibility of GenAI apps vary considerably, their educational value and impact on knowledge are likely to be unprecedented. It is therefore incumbent on doctoral supervisors, researchers, and examiners to take joint action and responsibility in upholding the integrity of the doctorate as an original, rigorous, and significant contribution to knowledge that assumes human authorship in its intent, even when such authorship is aided, enhanced, or extended by GenAI technologies.
Crucially, although these tools produce fluent writing as an output to a human request (e.g., write me an essay on helicopters), they have no understanding of the texts they produce and therefore cannot be held accountable for the truth of their claims (see 2). What they do is mimic human texts, with the output based on a statistical model of commonly used words. This results in the tools producing output which can sound knowledgeable despite being prone to inaccuracies. Overall, GenAI remains a complex technological and political artefact prone to bias and unaccountability in its training data, inaccurate outputs, and environmental impacts that are part of wider and fast-evolving socio-technical concerns.
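To make this point concrete, the short sketch below is a deliberately toy illustration in Python, not a description of how any specific GenAI tool is built: the tiny corpus, names, and output are all illustrative assumptions. It generates fluent-looking word sequences purely from the statistics of the text it has seen. Real LLMs use vastly larger neural models, but the underlying principle is the same: the next word is chosen by statistical likelihood, with no check against the truth of the resulting claim.

```python
# Toy sketch only: a bigram "language model" that picks each next word
# in proportion to how often it followed the previous word in a corpus.
# Illustrative of the statistical principle, not of any real GenAI tool.
import random
from collections import Counter, defaultdict

corpus = "the helicopter flies . the helicopter lands . the pilot flies".split()

# Count which word follows which (the model's entire "knowledge").
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word weighted by observed frequency."""
    candidates = following[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Generate text that reads fluently but encodes no model of what is true.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output is grammatical-sounding precisely because it recombines frequent patterns, which is also why fluency alone is no evidence of understanding or accuracy.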
To find out more about how these tools work and how to make effective use of them, JISC offer some training material: https://nationalcentreforai.jiscinvolve.org/wp/2024/08/14/generative-ai-.... A range of OU training material will gradually be integrated into mandatory supervisor training, the PACE programme, and other core Graduate School training.
The Graduate School’s core position is that doctoral researchers are responsible and accountable for authoring their thesis. The Graduate School also recognises that appropriate uses of GenAI tools will vary throughout different stages of doctoral research and across disciplines. Any use of GenAI during the doctorate must, therefore, be commensurate with use that is appropriate to the discipline, stage, and task undertaken. Such use further demands that doctoral writers retain authorship and intellectual oversight at all stages of their evolving and final thesis (see 2). Any use of GenAI must be jointly agreed between doctoral researchers and supervisors and acknowledged, as exemplified in 7. If in doubt, please refer to 8.
The key principles, values, and expectations of GenAI use for doctoral researchers, supervisors, and examiners are listed below.
Work submitted for feedback or examination must be the result of students’ own intellectual work (see 2). Students are responsible and accountable for the truthfulness of their research and for the authorship of the text submitted in the thesis. Using technologies to proofread, spell-check, or redact text does not undermine authorship.
As with any software, a student’s use of GenAI must align with the University's expectations for academic integrity and enable them to meet the criteria of a doctoral qualification, including ethical considerations and authorship.
Any use of GenAI tools during the doctorate included or referred to in formal examinations (e.g., upgrade report, summative assignments, or thesis) must be accompanied by critical analysis and oversight. Pasting output from a GenAI tool without critical editing, fact-checking, and revision is not reasonable use.
Any use of GenAI tools must be declared within the thesis, including its use as an interlocutor to generate and brainstorm ideas. Templates for declaring a variety of uses are listed in 7.
Students should familiarise themselves with the expectations of their professional bodies and publishing organisations (e.g., ACM, APA, Elsevier), funding organisations (e.g., UKRI), and any faculty or departmental norms through discussion with supervisors.
Supervisors need to keep themselves informed about developments around GenAI and doctoral research so that they are able to engage in the kinds of open, trusting, and responsive conversations referred to above (see 4.3.5; see 9).
Academics and authors increasingly rely on GenAI to analyse data, generate outlines, summarise articles, and re-purpose, edit, and re-write content. In the spirit of an open, fair, transparent, democratic, and progressive education, supervisors should therefore not make unfair, unreasonable, or unsubstantiated assumptions about their doctoral students’ reasons and intentions for using GenAI.
There should be ongoing open, trusting, non-judgemental, transparent, and informed conversations between supervisors and doctoral researchers regarding the potential benefits and risks of using GenAI. These might range from concerns about appropriate disciplinary and task-specific use to new and creative applications.
Doctoral researchers and supervisors should be aware that some uses of GenAI tools may make the thesis more difficult to defend in the viva, even if they are permitted under this Guidance. Doctoral researchers must demonstrate intellectual ownership of and defend their work throughout the doctoral journey and final thesis; any use of GenAI that undermines this requirement should therefore be discouraged. For example, deferring understandings of theory, historical controversies, competing scientific interpretations, or scholarly literatures to the words of GenAI without further investigation, critical engagement, and intellectual oversight undermines the integrity of the doctorate and is likely to be exposed by critical and probing viva examiners.
Approaches to the use of GenAI agreed between doctoral researchers and supervisors must be documented in supervision meeting minutes so that they can be revised in light of ongoing needs, concerns, technological developments, and changes in institutional policies. This will ensure an accurate record of GenAI use is reflected in the ‘Declaration of use’ for the upgrade and final thesis (see 7).
Supervisors and doctoral researchers should both be familiar with the OU’s current GenAI provision, such as the Data and AI Hub (see 9). At the time of writing, the OU offers an enterprise version of CoPilot, which should nevertheless be approached judiciously (see 5.2.1). A green shield icon is visible in the toolbar when using the approved version.
Until it is clear what specific training is required, students and supervisors are responsible for remaining sufficiently informed of developments in GenAI by attending relevant higher education and doctoral events and/or signing up to regular updates (see 9).
Examiners should continue to examine theses and conduct oral examinations consistent with the OU’s current assessment procedures and policies. They must not upload any part of a doctoral thesis into a GenAI tool. In the absence of any overwhelming and unambiguous evidence of academic misconduct involving GenAI tools, examiners are asked to assess the doctoral thesis as it is submitted.
Examiners should declare any GenAI use of their own.
Examiners should raise any concerns regarding the use of GenAI with the exam panel chair for further guidance.
There are several ways to use GenAI tools in research, too many for a prescriptive list of what is and is not allowed. The following sections outline suggestions for suitable and sustainable practices and for what constitutes malpractice. In all cases, acceptable use of GenAI should be guided by the underlying principle that it is the human researcher who remains accountable for their research and research outputs by ensuring intellectual and ethical insight and oversight, originality, and critical reflection. These must not be deferred, delegated, or relinquished to GenAI (see 2). Students and supervisors do not need to consult about ‘good use’ in the list below but should consult when in doubt and in relation to any abuses and malpractice.
GenAI tools provide diverse opportunities to support authors. Tools can be used to assist in brainstorming, generating ideas, providing an initial outline, proofreading, generating alt-text for images and figures, or as a critical thought companion. Any use of a GenAI tool must be discussed with supervisors and documented (see 7). Overall, the human researcher remains responsible for authoring their doctoral text. For example, if using a tool like CoPilot for proofreading, the output text must be reviewed to ensure the intellectual meaning, intention, and accuracy of the text have not changed.
Similarly, GenAI tools can be useful for producing figures or data visualisations. In discussion between the doctoral researcher and their supervisory team, this may be acceptable, as long as: a) the underlying data is robustly collected; b) the figure accurately reflects the underlying data or idea; and c) the way in which the GenAI tool was used has been declared.
At the time of writing, using GenAI tools for analysing and drawing insights from data can be challenging, and practice will differ dramatically across disciplines. As a minimum standard, any such usage should be discussed with the supervisory team, justified like any other element of research design, described accurately and in detail in a ‘methods’ section, passed through ethical review (where relevant), and appropriate to disciplinary norms and research integrity. Examiners will assess such aspects of the research design as they would any other element of a project.
Overall, what matters is that use of GenAI tools is documented throughout the doctoral process, including in the ethics application, supervision notes, progress reports, upgrade reports, and the final thesis.
The overriding principle in avoiding abuses and malpractice of GenAI tools is to ensure that research is conducted with integrity. Further information on research integrity at the OU can be found here: Research Integrity.
Below is a non-exhaustive list of the primary risks of using GenAI tools which the OU would consider as abuse or malpractice in the context of doctoral research.
Many GenAI tools store user inputs and interactions as training data to improve performance. Important data that should not be entered into such tools includes (but is not limited to):
If you want to use GenAI to support interview transcription, you should use an institutionally approved tool (such as your enterprise CoPilot account or MS Teams). Your participants should have consented to transcription being completed through GenAI, and this consent should be part of your ethics agreement. You should also be aware that GenAI transcription can lead to misinterpretation and oversimplification of cultural and contextual nuances.
The risks of using GenAI in doctoral research include but are not limited to:
It is important that GenAI outputs are not accepted uncritically. While LLMs can produce compelling text, they are not committed to the truth even though they may produce claims that are true. Ultimately, if a student cannot defend, explain, critically evaluate, and take responsibility for the argument of their thesis, they have acted inappropriately. To avoid this, students must question, triangulate, and critically assess their use of GenAI technology.
For example, relying on GenAI tools to produce a literature review is misconduct. Even if the outputs were reworded and/or re-organised to such an extent that they could be considered a student’s own, this would still constitute malpractice likely to be noticed during an oral examination (such as the upgrade and final thesis viva). This is because the purpose of most literature reviews at doctoral level is to demonstrate an understanding of research communities, traditions, histories, theoretical connections, and conventions as they relate to a student’s specific concerns, research contexts and questions, methodology, and methods. A doctoral literature review is not a list of who has written what on a particular topic, which GenAI may or may not accurately reproduce depending on what data it has been trained on. Rather, a literature review is the demonstration that a student is knowledgeable about the state of the art in their disciplinary area as it relates to their research project. Since GenAI can also invent non-existent literatures (see 5.2.2), students risk compromising the rigour, validity, and reliability of their doctoral qualification when they rely uncritically on GenAI outputs.
Further unsuitable uses of GenAI which compromise truth include using it as a search engine to generate results, data, citations, or scholarship (unless the student has an exemption due to researching GenAI itself, see 6).
If the research project is likely to involve ethical issues around the use of GenAI, HREC or SRPP discussion and approval is required (see Research at the OU).
The OU does not authorise the use of tools that purport to detect AI use, because there is no compelling evidence that such tools are reliably effective. Doctoral theses should therefore not be uploaded to any such tools.
Given the wide scope of research at the OU, it is anticipated that some research projects may directly focus on GenAI itself (for example, developing new tools to minimise bias, or exploring the impact of GenAI tools on the legal profession). In such circumstances, there may be exemptions to the specific uses and abuses laid out in this document, although the key principles and values still stand. If in doubt, contact your Faculty Lead PGRT/DRD in the first instance.
All acknowledgements of GenAI tools use should include the following basic information and be recorded in supervisory notes and declaration of use at thesis submission:
It may be appropriate, depending on use, to detail the prompts provided and how the outputs were processed. Copies of these should be kept as part of the research methods resources. Examples of ‘declaration of use’ can be found below.
Non-exhaustive examples of ‘Declarations on Generative AI’ could include:
(1) GenAI tools have been used in the development of this thesis. ChatGPT 3.5 was used in January 2025 to assist in the proofreading of section 4.5. I provided the section text alongside the prompt “please improve the clarity of the following text”. The output was checked to ensure that it retained the intellectual argument I had constructed while improving the presentation of the text. The use of GenAI has been discussed and documented in the supervision meeting of December 2024.
(2) CoPilot was used in September 2024 to develop the data visualisations in figures 2, 5, and 7. In each case, the data provided to the tool was neither under copyright nor subject to other data protection restrictions. The figures were reviewed and validated for relevance, appropriateness, and accuracy before incorporation into the final manuscript to maintain the scholarly integrity of this research. The use of GenAI has been discussed and documented in the supervision meeting of December 2024.
(3) During the preparation of this work, the author(s) used ChatGPT and Grammarly in order to check grammar and spelling and to paraphrase and reword. After using these tools/services, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the publication’s content (source: https://ceur-ws.org/GenAI/Policy.html).
(4) The author has used GitHub Copilot to write, test, and debug the Python scripts developed to analyse the data in this work. Copilot was configured to prevent matching with publicly available code. All of the generated code was read and checked by the author, who takes full responsibility for its operation. This was discussed with the Faculty lead PGRT and documented in the progress report of February 2025.
(5) The author has used the OU version of CoPilot to upload their thesis and generate example mock viva questions as part of their preparation for viva.
GenAI is changing the way research is conducted in many fields. It is therefore understandable and reasonable to doubt its use and reliability in research.
Therefore, when doctoral researchers are in any doubt, they should raise questions and concerns with their supervisors during supervision meetings and/or by correspondence.
If, after speaking with supervisors, doubts and unresolved questions remain about the appropriateness of the proposed use of GenAI, in the first instance the Faculty Lead (PGR Tutor, Director or Convenor) should be contacted to ensure concerns are discussed in a climate of trust, genuine enquiry, and exploration conducive to sustainable and ethical good research practice. If further conversations become necessary, the Graduate School can be contacted via PACE Lecturer Julia Molinari at julia.molinari@open.ac.uk. Please also cc. daniel.gooch@open.ac.uk and nicoleta.tipi@open.ac.uk in any correspondence.
Existing OU information on AI has informed this Guidance. This is listed below.
Several (social) media sources have informed this Guidance. These are listed below.
The following selected references have significantly informed this Guidance. Relevant updates will be made as further evidence is published.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada. https://doi.org/10.1145/3442188.3445922
Bouchard, J. (2024). ChatGPT and the separation between knowledge and knower. Education and Information Technologies. https://doi.org/10.1007/s10639-024-13249-y
Carrigan, M. (2024). Generative AI for Academics. Sage Publications Ltd.
Gallagher, J. R. (2020). The Ethics of Writing for Algorithmic Audiences. Computers and Composition, 57, 102583. https://doi.org/10.1016/j.compcom.2020.102583
Kamler, B., & Thomson, P. (2006). Helping doctoral students write: Pedagogies for supervision. Routledge.
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.02819.
Luccioni, A., Lacoste, A., & Schmidt, V. (2020, June). Estimating Carbon Emissions of Artificial Intelligence [Opinion]. IEEE Technology and Society Magazine, 39(2), 48-51. https://doi.org/10.1109/MTS.2020.2991496
McQuillan, D. (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol University Press.
Ou, A. W., Khuder, B., Franzetti, S., & Negretti, R. (2024). Conceptualising and cultivating Critical GAI Literacy in doctoral academic writing. Journal of Second Language Writing, 66, 101156. https://doi.org/10.1016/j.jslw.2024.101156
Paul, D., Trisha, G., Yosra Magdi, M., & Jessica, M. (2025). How to read a paper involving artificial intelligence (AI). BMJ Medicine, 4(1), e001394. https://doi.org/10.1136/bmjmed-2025-001394
Richardson, L., & St. Pierre, E. A. (2005). Writing: A Method of Inquiry. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research, 3rd ed. (pp. 959-978). Sage Publications Ltd.
Shah, C., & Bender, E. M. (2022). Situating Search. In Proceedings of the 2022 Conference on Human Information Interaction and Retrieval, Regensburg, Germany. https://doi.org/10.1145/3498366.3505816
Vallor, S. (2024). The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press. https://doi.org/10.1093/oso/9780197759066.001.0001
Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), 26. https://doi.org/10.1007/s40979-023-00146-z
Please get in touch for research-degree-related issues by phoning 01908 653806 or sending an email.
See further contact options and a Who's who in PG research.