Exploring Generative AI: critical skills and ethical use

Image: Student using ChatGPT while studying

How do I attend?

Go to Exploring Generative AI: critical skills and ethical use and select 'Join Room' at the bottom of the page. The room remains closed until the trainer has entered, usually around 15 minutes before the start time.

Benefits

This 45-minute interactive session will explore Generative AI: how it works, and its strengths and limitations. It will also offer guidance and tips to help you ensure that your use of Generative AI is appropriate, ethical and transparent, and aligns with the Open University's guidance and values.

Please note that this session is not designed to train attendees in how to use specific Generative AI tools; rather, it aims to develop your critical awareness of Generative AI within the context of your academic study.

Who's it for?

This session is aimed at students, but it could also be beneficial for staff members. 

Learning outcomes

By the end of this session you will be able to:

  • recognise the limitations of Generative AI
  • critically evaluate Generative AI tools and their outputs
  • create effective prompts for Generative AI tools
  • appropriately acknowledge the use of Generative AI in your work

Downloads and further help

The slides and accompanying handout for this session are available to download. 

The following online resources provide additional information and guidance:

  • OU guidance on Generative AI for students
  • Cite Them Right Harvard guidance on referencing generative AI
  • Library activities on evaluating resources: Evaluation using PROMPT and Deciding what to trust online

Recording

TRAINER: Welcome to this training session on exploring generative AI: critical skills and ethical use. My name is Carol Wright, and I'm one of the learning and teaching librarians here at the OU. The slides from this video and an accompanying handout are available to download. You'll find the links to these in the video description on YouTube or on the Exploring Generative AI training session page on the Open University Library website. You will need to be a current Open University student or a member of staff to download these.

Learning outcomes. By the end of this session, you will be able to recognise the limitations of generative AI, critically evaluate generative AI tools and their outputs, create effective prompts for generative AI tools, and appropriately acknowledge the use of generative AI in your work. Artificial intelligence, or AI, in everyday life. In what ways might you already interact with AI in everyday life? Pause the video and think about this for a few minutes.

 

Examples of AI in everyday life. Lots of us have been interacting with AI in various ways for years, sometimes without even realising it. Some of the examples we thought of were chatbots used in customer service, shopping, banking, booking a holiday or tech support. Facial recognition, for unlocking your phone and in camera apps. Music and media streaming services: for example, Netflix and Spotify use AI algorithms to recommend shows and music based on your viewing and listening history.

Online shopping, with personalised recommendations and adverts. Digital assistants, for example Amazon Alexa, Google Assistant and Siri. Sending an email: spell check and spam filters all use AI. Travel and navigation, for example Google Maps. Writing assistance, for example Grammarly. Health and fitness apps like Fitbit, and social media feeds: AI controls the feeds that you see whilst browsing social media platforms and the notifications that you receive.

 

Generative AI: what is it? Even though AI is not new, you've probably been hearing a lot in the news recently about easy-to-use, publicly available generative AI tools, like ChatGPT and Google Bard. But what is generative AI? It's a type of AI technology that automatically generates or creates content in response to prompts given to it by users. These tools can generate text, images, music, video, code and other formats. ChatGPT and Google Bard are examples of text-based generative AI tools, while DALL-E 2 is an AI tool that generates images and art. More tools are being developed all the time as the technology rapidly develops and evolves.

 

So how does generative AI actually work? Generative AI tools that deal with text are called large language models, or LLMs. They have been trained on vast amounts of data from web pages, social media conversations and other online content to generate human-like responses to our prompts. And they can continuously update with new data by collecting our previous questions and prompts. LLMs such as ChatGPT work a lot like predictive text, although they are much more sophisticated, large-scale versions.
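To make the predictive-text analogy concrete, here is a toy Python sketch (not part of the session materials): it simply picks the most common next word from a small set of hand-made counts. A real large language model does something conceptually similar, but with probabilities learned from vast amounts of text and billions of parameters; the words and counts below are invented purely for illustration.

  # Toy illustration of next-word prediction; the counts are invented for this example.
  from collections import Counter

  # How often each word follows a given two-word context (hand-made, not real data).
  next_word_counts = {
      ("generative", "ai"): Counter({"tools": 5, "models": 3, "content": 2}),
      ("ai", "tools"): Counter({"generate": 4, "create": 3, "can": 2}),
  }

  def predict_next(context):
      """Return the most frequent next word for a two-word context, if we have counts for it."""
      counts = next_word_counts.get(tuple(word.lower() for word in context))
      return counts.most_common(1)[0][0] if counts else None

  print(predict_next(("Generative", "AI")))  # prints: tools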

 

They create content based on recognised patterns in text. Each response is unique: you will probably never get the exact same reply twice. Responses are conversational. Once you've asked a question, you can simply carry on the conversation by adding new questions or prompts, rather than having to redo your search as you would in a search engine like Google. The responses you get are only as good as the input the tool receives from you, so it relies on effective prompts, which we will talk about later.

 

OU guidance on generative AI for students. Some of you may already be using or have thought about using generative AI tools to help with everyday tasks in your personal lives, in the workplace or professional contexts, or perhaps even for your academic work. In response to the growing interest in generative AI tools, the OU has developed guidance to help you to use the technologies appropriately in your studies. It outlines the OU's current position on acceptable and unacceptable use of generative AI and provides guidance on generative AI in assessment, including how to reference and acknowledge the use of generative AI tools.

 

A link to the OU guidance on generative AI for students is available on the Exploring Generative AI training page on the library website. You will need to be a current student or a member of staff to access this. As we mentioned earlier, the technology is continuously evolving. And the University's guidance will also evolve. So we would encourage you to regularly check the guidance to ensure that you're up to date with the OU's current position on the use of generative AI.

 

Using AI ethically and responsibly. Generative AI can be an effective tool for learning. But to ensure that it is actually enhancing learning, it needs to be used in a responsible and ethical way. Assessment and feedback are essential in developing transferable skills such as communication, research and critical thinking. If you rely too heavily on generative AI, you run the risk of not developing these skills.

 

Large language models can exhibit bias in their responses, often generating stereotypical or prejudiced content. This is because they are trained on large data sets that may contain biased information. Despite safeguards put in place to prevent this, large language models and image generators can sometimes produce sexist, racist or homophobic content. Be aware of this, and critically evaluate content you create with generative AI.

 

LLMs can sometimes hallucinate, or generate information that is incorrect. Always fact-check generated information for accuracy using library search, library databases or Google Scholar. Be transparent in your use of generative AI: say when and how you've used it. And give credit to the source of any AI-generated content in your work, so reference it as you would any other source.

 

Respect privacy and copyright. Do not enter confidential or sensitive information into generative AI tools. So just to reiterate the OU guidance: you must not put any OU materials, including assessment questions, into AI tools, as this breaches OU copyright. Be aware of what might be considered plagiarism. Do not copy and paste from AI tools and claim it as your own work. And if you're not sure, ask.

 

Limitations. Generative AI has some limitations. There are some things that it struggles with and some things that it just can't do. Sometimes generative AI will produce information that is incorrect. We call these hallucinations. Generative AI tools present information in a convincing way, so it's very easy to be misled by incorrect information. Always fact-check and evaluate any information generated. As I already mentioned, generative AI can be very biased. It may generate content which is stereotypical, racist or sexist.

 

Generative AI usually gives very generic answers to questions as it lacks the emotional intelligence and critical thinking skills that you have yourself. It also lacks an understanding of context, which is why effective prompts are really important. If you ask generative AI for a list of references, make sure you check each one, as it sometimes makes them up. But again, it is very convincing. AI cannot be used to replace your own critical thinking. These are skills that you need to develop within your studies.

 

Activity: comparing AI- and human-generated text. One way to gain a better understanding of the limitations of AI is to compare AI- and human-generated content. So we asked a human and a generative AI tool, in this case ChatGPT, to perform the following task: critically analyse and reflect on how you conducted your literature review. I'm going to share the two responses in a moment, and I'd like you to spend a few minutes considering them. Can you identify which response was written by a human?

So here are the two responses. And I'm going to read them both out. Response A. "I created a plan which identified keywords relevant to the topic. I used two databases to conduct my search. I realised I needed to use a wider variety of sources to undertake my literature review, including library search as I missed key studies. I also need to be more meticulous with my search terms and identify keywords from items I find. I applied the PROMPT framework to the literature to identify relevant sources of information."

Response B. "In conducting a literature review, I focused on defining clear objectives and themes, establishing a comprehensive search strategy using relevant keywords and databases. Employing predefined inclusion and exclusion criteria, I systematically screened and selected sources, evaluating their quality and relevance. Data extraction involves synthesising key findings and analysing emerging patterns." Pause the video for a few minutes. Can you identify which response was written by a human?

Response A was written by a human. Now ChatGPT provides a general description of how a literature review might be undertaken. Although it uses the word "I," which is misleading, it is still quite a vague statement. It doesn't give anything really specific; it's factual but nothing more. The AI-generated text also feels quite impersonal, even though it is written in the first person. The human answer includes factual information but also elements of reflection, analysing different aspects of the topic.

Now this is a very human skill. A generative AI tool like ChatGPT cannot reflect on its own thoughts or apply critical thinking to develop an argument in the same way that a human can. The human response is also much more specific and gives examples. So this activity has helped to show that generative AI should not be seen as a replacement for things like writing, critical thinking and evaluation. These are essential skills that you will need to develop throughout your studies and in everyday life.

 

This slide is just a summary of the activity we've just undertaken. Pause the video if you'd like to look at this slide in more detail.

Effective prompts. When you ask a generative AI tool to perform a task for you, the instructions you give it are called prompts. Developing your prompt-writing skills will enable you to use the tools more effectively and efficiently. I mentioned the importance of effective prompts earlier. The AI can only work with the information you give it, so its response will only be as good as your input.

 

So how can you make sure that your prompts are effective? I'm going to talk through some tips for effective prompt writing, followed by a demo to show some of them in action. So first of all, clearly state what you want the AI to do or generate. Specify the format or structure you want the response in. So for example, do you want a paragraph, a list, or a summary? Give the AI relevant context to understand the task better.

 

Write your prompts in natural conversational language. Don't worry about having to think like a machine. Ask your question as if you're having a conversation with the tool. Instead of "yes" or "no" questions, use open-ended questions to encourage detailed responses. Clearly state the tone or style you want the AI to use. So do you want it to be formal, informal, humorous? And provide examples of the tone or style if necessary.

 

If the task is complex, break it down into smaller sub-tasks or steps and ask the AI to tackle each step individually. And be patient: be prepared to change or refine your prompts, as you may not always get the perfect response first time. Evaluate the responses you get carefully, and we'll talk about this in more detail in a minute. Give feedback to the AI if its response needs improvement; there is usually a thumbs-up or thumbs-down button for this.
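As an optional aside for anyone who uses generative AI through code rather than a web page, here is a minimal Python sketch of the same ideas. The openai package, the model name and the need for an API key are assumptions for illustration only; the session itself uses the web interfaces. The specific prompt simply restates the tips above: a clear task, a role for context, a format, a tone and constraints.

  # Minimal sketch: contrast a vague prompt with a specific one (assumes the openai
  # package is installed and an OPENAI_API_KEY is set in the environment).
  from openai import OpenAI

  client = OpenAI()

  vague_prompt = "meal plan leafy greens"

  specific_prompt = (
      "You are a professional nutritionist skilled at planning healthy diets. "  # role / context
      "I want to eat more leafy greens and fruit high in vitamin C. "            # clear goal
      "Suggest a 7-day meal plan as a bulleted list, one day per bullet, "       # format
      "in a friendly, informal tone, and exclude fish and shellfish."            # tone and constraints
  )

  for prompt in (vague_prompt, specific_prompt):
      reply = client.chat.completions.create(
          model="gpt-4o-mini",  # assumed model name, for illustration only
          messages=[{"role": "user", "content": prompt}],
      )
      print(reply.choices[0].message.content[:300], "\n---")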

 

I'm now going to give a demonstration of writing effective prompts. So the screen will change as I share my browser. For this exercise, I'm going to use Google Bard. It's one of several generative AI tools available. I want to demonstrate how thinking about your questions and prompts when using AI can make a difference. For this example, I've decided I want to improve my diet and eat more leafy greens and fruit high in vitamin C. And I want some help with planning some meals for the week.

 

So I'm going to do a search in Google Bard and see what it comes up with. I'm going to put some keywords in, a bit like I would if I was just using Google. There is a search box at the bottom of the screen which says 'Enter a prompt here'. So I'm going to put into the prompt box "meal plan leafy greens". Then to the right-hand side of the search box there's an arrow labelled Submit. So we'll press Submit and see what Google Bard comes up with.

 

So I'll just scroll down slowly so you can see the response. What it's given me is seven days' worth of meal suggestions for breakfast, lunch and dinner. Now this is very helpful, but it doesn't tell me enough. How much of each ingredient do I need? What are all the ingredients? How do I make them? And it has also suggested some ingredients I don't like; for example, I don't like fish. Now Google Bard has done a good job of responding, but I haven't been specific enough for the tool to deliver what I really wanted.

So if I think about that, the first thing I need to tell it is to exclude fish and shellfish. So I'm going to do that now. And we can just add more to the conversation; we don't need to start again like we maybe would with Google. We can just type into the prompt box and carry on the conversation: "Exclude fish and shellfish", and press the Submit button to the right of the prompt box.

 

So if we just scroll down slowly again. By doing this, we've immediately solved one problem with the meal plan, and it now excludes ingredients that I don't eat. But what I have noticed is that it hasn't only excluded fish and shellfish; it's taken meat out completely. So again, we might like to adapt that a little bit to include meat again. And I still don't really have enough information here to follow the plan in practice. So I'm going to give an example of a better prompt. Now this prompt was provided by Google as an example and has been adapted by me for this demonstration.

 

So, into the prompt box at the bottom of the page again, I have put: "I want to eat more leafy greens and fruits with vitamin C. You are a professional nutritionist skilled at planning healthy diets and meals. Provide a set of recommendations for how I can do this. Make sure you provide enough variety so that I can rotate my meals, and exclude fish and shellfish." Then we press Submit to the right-hand side of the prompt box again.

 

And we'll just scroll down slowly to have a look at the recommendations. OK, so this has given us a lot of information about how to include these foods in my diet. Now I'm going to add the following line to the conversation to get detailed recipes. So we're just going to carry on the conversation again and enter into the prompt box "Be specific and provide exact ingredient recommendations and quantities", and press that Submit button again.

 

I'll just scroll down slowly to have a look at the response. You can see that the results displayed would allow you to make these meals. If you didn't know what any particular ingredient was, you could ask Google Bard to clarify. Now some of the reasons the results have been more helpful for me this time are that, in the prompt, I am being clear and specific; I was able to use natural language; and I have reviewed and refined my request based on the previous results, for example when we asked it to exclude fish and shellfish.

 

I learned from additional searching that unless I specify otherwise, it won't give me quantities, and when it does give me quantities, it will give them in imperial units unless I ask for metric. And I have set a context by giving Bard a role as a nutritionist. So using generative AI can be an iterative process: you keep improving and refining your requests until the tool delivers what you are asking for. What you may find is that your initial request is more complex than you imagined.

 

Some tools limit how much information they deliver to you in one response, so you may need to break your request down into smaller chunks. For example, to ensure I get all the information I need for my seven-day meal plan, I may need to do each day separately. So I'm now going to go back to the slides. The screen will change, and we will be back to the slides.
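For anyone following the demonstration in code rather than a web page, the same iterative refinement can be sketched as a multi-turn conversation: each follow-up prompt is appended to the same message history, so the tool keeps the earlier context rather than starting from scratch. This is a sketch only, reusing the assumed openai package and model name from the earlier example; the follow-up prompts are the ones used in the demonstration.

  # Sketch of carrying on a conversation: append each reply and follow-up prompt to the
  # same history so the tool refines its earlier answer (assumes the openai package).
  from openai import OpenAI

  client = OpenAI()
  history = []

  follow_ups = [
      "meal plan leafy greens",
      "Exclude fish and shellfish.",
      "Be specific and provide exact ingredient recommendations and quantities in metric units.",
  ]

  for prompt in follow_ups:
      history.append({"role": "user", "content": prompt})
      reply = client.chat.completions.create(
          model="gpt-4o-mini",  # assumed model name, for illustration only
          messages=history,     # the whole conversation so far, not just the latest prompt
      )
      answer = reply.choices[0].message.content
      history.append({"role": "assistant", "content": answer})
      print(answer[:300], "\n---")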

 

Critically evaluate. You will need to use your critical thinking skills throughout your studies, whether it's critically evaluating journal articles or a website. These skills are particularly valuable when using generative AI. Not only do you need to critically evaluate the output created by the AI, but you need to evaluate the tools themselves. So first, think about why you're using generative AI. If you're creating content and passing it off as your own, then this violates the OU's academic integrity policy.

Think about the tool you want to use. What do you know about it? Does it use up-to-date information? And do you know anything about the data it was trained on? Fact-check everything you create with generative AI. Use trusted and reliable sources, such as library search, library databases, or Google Scholar to check every output you create. Do not trust any references created by generative AI. Check that those references actually exist. Remember, tools like ChatGPT give very plausible responses which may be entirely fictitious.

 

Be aware that these tools can create stereotypical or discriminatory content. Keep an eye out for this in the content you create, and adjust your prompts or use other resources to combat this. If you'd like to learn more about evaluating resources, there are details on the screen of some library activities that you can work through at any time and revisit as many times as you need. These activities are Evaluation using PROMPT, which helps you apply a framework for evaluating resources, and an activity on deciding what to trust online. Links to these activities are available in the YouTube video description and also on the Exploring Generative AI training page.

 

Referencing generative AI. The University expects all submitted work to be your own, and sources should be fully credited. If you use generative AI for your assignments, make sure you reference and acknowledge it. Cite Them Right provides guidance on how to reference generative AI. It covers what information to include in your full reference and in what format. It also includes examples of in-text citations and the reference list.

 

There is a link to the Cite Them Right Harvard guidance in the YouTube video description and on the Exploring Generative AI training page. The OU guidelines also advise that it is good practice to add an appendix to your submission that outlines your use of generative AI, such as the prompts you've used and the outputs generated by the AI tool. Remember to check whether your assignment brief or your tutor requires this.

 

Sources of information and guidance. Check the OU guidance on Generative AI for Students regularly, as it will be updated over time; it is available from the Exploring Generative AI training page. If you have any questions on using generative AI in your studies, please contact your student support team or, for postgraduate research students, the Graduate School. Also check your module assessment information and speak to your tutor.

 

For help with referencing generative AI, the Cite Them Right Harvard guidance for Generative AI is available from the Exploring Generative AI training page. You can also contact the Library Helpdesk; contact details are available on every page of the library website. Learning outcomes revisited. You will now be able to recognise the limitations of generative AI, critically evaluate generative AI tools and their outputs, create effective prompts for generative AI tools, and appropriately acknowledge the use of generative AI in your work. Thank you for watching.

 

Upcoming Online Training Sessions

  • Wednesday 12 February 2025, 19:30
  • Monday 24 March 2025, 19:30