Who's A(i)fraid of the Big Bad Wolf

Fionn McGrath

Transcription

Generative AI is a technology that allows machines to produce textual, visual, and even video outputs based on natural language requests. It has become more accessible and widespread, and it is changing the landscape of higher education. There are various generative AI tools available, such as ChatGPT, Microsoft Copilot, Sora, DALL-E, Claude, and Llama. These tools are constantly evolving and improving, but they also come with challenges. There are concerns about bias and the potential for hallucinations in the outputs generated by these systems. It is important to critically evaluate the reliability and suitability of the outputs and not rely solely on AI tools. Generative AI is different from platforms like Google, as it can quickly create content that may appear reasonable but may contain obscure or questionable references. It is a complex and evolving technology that requires careful consideration and evaluation.

Welcome to the next installment of our Give Me About 10 Minutes On podcast series from the Academic Integrity Unit. Today's episode is entitled Who's A(i)fraid of the Big Bad Wolf. During this discussion we're joined by all members of the Academic Integrity Unit, and we'll discuss all things to do with generative AI at the University of Limerick, and the significance of gen AI in higher education for both staff and students. And of course, we'll make reference to a lot of resources that we have developed as a team, and we'll signpost these throughout our discussion today. So I'm joined by Dr. Sylvia Bonini and Dr. Fionn McGrath, who are educational developers in the Academic Integrity Unit. And my name is Dr. Marie-Claire Kennedy; I am Academic Integrity Lead within the Academic Integrity Unit. So guys, thank you very much for joining me today.

Just a general question, I suppose, to start with, and I don't know if there's really a true and fixed answer to this, but maybe it's more of a discussion point: what is generative AI, and what does it mean for us in higher education?

I think when you're talking about generative AI, typically it's associated with large language models. It came on everyone's radar, literally everyone's radar, almost overnight with the release of the first publicly available ChatGPT, and it's just grown since. But what it is now is the capacity, I suppose, to request, in natural language, that the machine produce an output. And that output can now be textual, it can be visual, increasingly it's going to be video, it can be code. So what it really boils down to is a means of engaging in natural language with the machine for an output. Getting the computer to actually produce something is now extraordinarily simple, whereas it was extraordinarily complex and very technical beforehand. And it's a case of how that, as a piece of technology, is going to transform not just the institution we're in, the university, but all institutions. That's going to be the big, big question. There's some hyperbole about it, potentially; some people are talking about P-doom scores and AGI, and then there are some people who are putting their heads in the sand, almost like an ostrich, and saying it's not going to happen. So it'll probably be somewhere in the middle. And that's what we're trying to navigate now, specifically in our unit, from the perspective of academic integrity.
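To make that "natural language in, output out" point concrete, here is a minimal sketch of requesting an output from a large language model programmatically. It assumes the OpenAI Python client and an API key set in your environment; the model name and prompt are purely illustrative, not a UL-endorsed workflow.

    # Minimal sketch: a natural-language request to a generative model.
    # Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "user",
             "content": "Explain academic integrity to a first-year student in two sentences."},
        ],
    )

    # The system will always produce something; whether it is reliable or
    # fit for purpose is for the reader to judge, as discussed above.
    print(response.choices[0].message.content)

The point of the sketch is the interface rather than the tool: one plain-English request in, one generated output back, with no programming expertise required beyond the call itself.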
So I think there's always been some element of artificial intelligence in our lives, whether we think about chatbots when we're trying to use a website or internet banking or something like that; we have been exposed to this. But maybe it's more widespread and more accessible now than ever before. And it feels, I suppose, like it's arrived very quickly on the shores of higher education, and it has really challenged students and staff members, academics, to think about what this means for them now within their roles. Fionn or Sylvia, could you maybe outline to me some of the generative AI tools that we know people are using, ChatGPT being the obvious one, Microsoft Copilot. Are there others out there that we know are being used? We'll talk in a moment about what we permit to be used in the university, but what else is out there?

Well, we've discussed it with students in smaller workshops, and the vast majority of students go towards ChatGPT. That seems to be the standard. But there's a whole plethora of different large language models, and then there are different generative AIs as well; it's not just the large language models. So you will have Sora, which is video-based, and you'll have DALL-E, which is image-based; those are OpenAI's ones. You have Claude. You have Copilot, the Microsoft one built on the OpenAI engine. And increasingly you've got Llama, the Meta one, Facebook's, which people are probably less familiar with, but it's going to be behind a lot of the more bespoke options. You're going to see a proliferation of a whole series of new AI tools, some of which will be based on those large language models, some of which will be designed on more bespoke models. But if you go onto the website There's An AI For That, which we've talked about before, everyone's been checking back on it intermittently, and you can see it jump from 1,000 options to 30,000 options. So the options for generative AI systems are just cumulatively growing; even from when I'm saying this to when someone listens to it, it will have grown again. Specifically on the academic side, you've got Scite, for example. One of the criticisms of ChatGPT initially was that it was hallucinating fake references; Scite will actually go off and source material and give you correct ones. So you're going to have a lot of these small, specific tools that you'll either plug into the bigger models or use in a multimodal fashion. There's a huge number of these tools, and they're going to get better and better, and it's just something we have to be aware of. You don't have to keep your ear to the ground and know about each and every one as they emerge, but you do have to be aware that the capacity of these systems is going to increase rapidly, and already is.

And I suppose, like anything in life, you get what you pay for, and if you pay a subscription to certain services, you get better quality outputs, right? I mean, in my kind of limited interaction with ChatGPT (I think it's 3.5, the free version, I've been using), certainly some of the output there at times is disappointing, and I suspect it's better if one were to pay the subscription. Let's talk about maybe some of the things that people need to be aware of if they were to go and use a generative AI tool. So you hear about things like bias.
You mentioned hallucinations. If we can maybe try and package some of those up, so people know what they're getting involved with there.

I suppose I'd have kind of a peculiar take on both of those. I'm not sure about bias, in the sense that it presumes some bias-free ideal scenario, which is kind of problematic. And even hallucination is potentially problematic, in the sense that people talk about hallucinations as if they're a malfunction, whereas in actual fact the hallucination is in the eye of the beholder, and to a certain extent the bias is as well. So without getting into the philosophy of AI, there is a sense in which the content that these systems produce, and they will always produce on request, may not be fit for purpose. I think that's the concrete bit that everyone can agree on. What constitutes a hallucination and what doesn't is controversial, and bias is massively controversial, even down to the whole Gemini controversy from Google, and they'd even changed the name of it by then. But you have to remain cognizant of the fact that these systems will always produce an output, and the reliability, depending on the domain, depending on the discipline, and the expectation of how reliable they will be, these things have to be re-examined. You can't just totally outsource to these machines, by any stretch of the imagination. But that said, as we said a minute ago, the systems are getting better and better and better. So that's the kind of balancing act, and I think that's where the tension comes from. Everyone knows these systems are brilliantly productive, certainly in comparison to what you would have had two years ago, but there is still that doubt as to the reliability, the suitability, or even the function of the output of these systems. So that's, I suppose, the broad debate that we're all trying to wrestle with.

And I think, certainly, I'm of the millennial generation, and we're very used to using things like Google and being able to critically appraise the outputs we're getting from it, whether they're from a reputable and reliable publisher or from a forum like Reddit. So we nearly have a hierarchy in our heads. But this is slightly different again, in that you're not sure what the input, or the programming input, is when judging the output. The source is nearly anonymous, and it's more difficult to appraise because of that. Is that kind of a fair assumption? I'm still trying to get my head around how it is so different to Google and critically analysing an output from it. We don't have that hierarchy built into our heads. Is that kind of it?

I think so, by virtue of the fact that, in one sense, to some extent, it was already beginning to happen with Google. There were those kinds of claims four or five years ago; Too Big to Know was a popular book, whose author's name escapes me now, but we can put it in the notes later. And that was the idea that there was such a massive plethora of information available online that, take for instance, from an academic integrity perspective, a student could go out and pull references from left, right and centre, put them together, and it could look like a very reasonable essay. But even someone who is an expert in the field might not necessarily be familiar with some of the references.
So that was already beginning to emerge with Google, with what seems like Stone Age technology now. And I think what's changed with ChatGPT is just the ease of creating that artifact; that's now instantaneous. So there was a worry that someone might find unusual or esoteric references to back up something peculiar they're saying within an essay. That was there with Google. Now it's a case of, depending on how much care a student has put into their particular assignment, if any, this might be something that's emerged rapidly. It reads very well; it can read convincingly. As I said, this is part of the problem with hallucination and bias: a lot of it will read very, very well. From a grammatical perspective it will be quite good; there won't be any spelling errors unless they deliberately put them in. It will read convincingly, up to a point. And if it's an area, say, for instance, where you're teaching a module, you might be an expert in one aspect of that particular discipline, but if it's an introductory module and you're lecturing on a certain section, or you're introducing first years to a certain theory, you might not even necessarily notice that something is amiss. And I think that's where the worry comes from. So I think that's the big difference there. But equally, the other difference is our inability to detect. Even with Too Big to Know, we were able to. Now detection is something that just isn't a goer.

And there is the element as well, as you were saying, that prompt writing is crucial when we're using such tools. Whereas with Google we are maybe inputting some keywords, here it's a different kind of approach; we wouldn't need such structured and well-thought-out writing there.

A programmatic inputting.

Exactly. Whereas prompt writing is something that we need to exercise when using these tools.
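To illustrate that contrast, here is a small, hedged example of a keyword-style search query versus a structured prompt. Both strings are hypothetical illustrations, not UL guidance, and the prompt pattern shown (role, task, audience, format, constraints) is just one common convention.

    # Hypothetical illustration: keyword search versus structured prompt.

    # What you might type into Google: a handful of keywords.
    keyword_query = "generative AI referencing university rules"

    # What you might write for a generative AI tool: role, task, audience,
    # format and constraints spelled out in full sentences.
    structured_prompt = (
        "You are helping a first-year university student. "
        "In plain English and in under 150 words, explain what generative AI is, "
        "give one appropriate and one inappropriate use of it in an assignment, "
        "and finish by reminding the student to check their own module's rules."
    )

A search engine ranks pages that already exist against the keywords; a generative model writes new text shaped by whatever role, audience, length and format constraints the prompt spells out, which is why prompt writing itself takes practice.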
Let's think about, we've talked in general terms now, but for us at the University of Limerick, our approach to AI, and things that staff and students need to bear in mind. I suppose, at the very outset: we have mentioned there are quite a few generative AI tools out there, and they have their strengths and limitations. At the University of Limerick, we endorse, or stand over, the use, where appropriate, of Microsoft Copilot. And that is because of the digital governance policies that are in place within the university; ITD have authorised it, because we subscribe to the Microsoft package of tools in the university. So if you are a staff member or student listening to this who has been authorised to use generative AI, or is thinking about integrating it into parts of your work, it's MS Copilot or nothing from our viewpoint. One should take great caution about inputting data or material into any AI platform, but certainly, outside of Copilot, one could run into trouble with the digital governance systems in the university. I suppose just to put that out there firstly. Sylvia, I might come to you now, thinking about, as a staff member, an academic who's gearing up for the next academic year and is really trying to get to grips with where can I get information, what should I do, what can or can't I do, and who is maybe just not sure where to start. How would you go about guiding that kind of individual?

Yes. So basically, all the information that staff and students can avail of is on our website. Our website really does offer a broad range of resources on gen AI, and we are going to put the link, of course, in the episode description. There is a specific section on gen AI that outlines general information about gen AI: what it is, gen AI and academia, prompt writing, and good practices on gen AI in teaching and assessment. And then there are specific sections on the same gen AI page where users can find information on gen AI and assessment, academic misconduct and gen AI, tips for students on the use of gen AI, templates for students, and the gen AI principles for staff, students and researchers. It is very important to say that the material we have put together on the website comes in a variety of formats: users can find videos, presentations, podcasts such as this one, interviews, and references to European and international guidelines. So it's a very rich set of resources, as well as, as we were saying, templates for students to download and use, and checklists. It's very, very important to highlight that. And I would also say, especially when it comes to the integration of gen AI into teaching practice and assessment, that we acknowledge the diversity of disciplines within the university, and because of that it is important to emphasise that each discipline will seek to integrate gen AI in a different way. But we do have tips for staff when it comes to gen AI and its integration and use. First of all, provide clear instructions to students about the use of gen AI. Secondly, and it is quite important, refer students to the library guidance on how to acknowledge the use of gen AI. Also, ensure that staff communicate to students that, as you said, Copilot is the recommended gen AI tool in UL. Also, of course, seek to upskill in gen AI through the university supports, and consider mapping to the five-step assessment framework, which we are going to talk about. And finally, I think that while we may see the advent of gen AI as something a little bit disruptive, we have to consider that it is also a great, great opportunity to reflect on teaching practices and assessment practices. So it is indeed an opportunity.

Yeah, it's an opportunity to be dynamic. It made me think about my own healthcare background. Every now and then you get a new drug or a new way of approaching a procedure, and of course we could stick with the old way, but now we use something new and quite useful, and we have to learn it, upskill, and move with the times, ultimately, in my case, for the benefit of patients. It feels a bit like that: yes, it's scary, but let's break it down a little and move with it. And I suppose, Sylvia, thank you for outlining all those resources that we have up there. We find ourselves as a team, there are three of us, spending quite a bit of our time immersing ourselves in the information and resources that are out there, and we've really tried to quality-appraise those tools and that information and consolidate it for our own university. So it can be tempting to Google what other places have done, but we've really tried to do that initial hard work up front. So please check out our website first.
And if you feel there is something missing, or you have some guidance that you need, do reach out to us and we will do the hard work there for you in trying to put some shape on it, or we'll know who to ask in the university as well. It might be ITD; it might be the library, for example, for some assistance. So, you mentioned the assessment framework, the five-step model, and we do have another podcast episode on that. But this is an example of some work we've done as a team to look at what resources are out there and present them in a user-friendly format for our own staff members. So Fionn and Sylvia, if you want to maybe give us a brief overview of that one, because, as I said, we've spoken about it before.

Yeah, we've devoted a podcast to it before: the assessment scale, which is kind of our means of recognising that, as we said, detection isn't the way to go, so incorporating AI into assessment is the best means of progressing. And what it is, is an attempt to allow AI to be incorporated in a way that means the outlines of the assessment can be clearly communicated to the students: the expectations you have as the lecturer as to how much AI is going to be used are established, and the expectations the student has as to what's permitted are established. And as I said, we've done a full episode on that, the kind of traffic light model: no AI; some AI, which we split into three sections; and then green for full AI.

And case studies.

And case studies as well, yeah. And what we also have is a Brightspace course that will be discoverable for staff. Now, it's not going to be hugely heavy. There's some information on academic integrity generally, and there'll be some videos focusing on students' attitudes and opinions, but there'll also be a generative AI section. What that will have is a brief explainer of the UL assessment scale. It'll also have the UL reflective tool, partially based on SETU's AI Maze, which is a series of questions that will produce a template; it will allow you to clearly outline, in your own thinking, how and why you want to incorporate AI into your assessment, and where, and if, I suppose. And then there's also one, as I said, on the assessment scale, which will allow you to clarify which scale your particular assessment fits into. So they're, I think, quite worth having a look at. They'll only be short, and they'll be accompanied by more of the work that we've talked about as well, such as the fundamental AI values that UL is adopting. So as I said, it's tricky, in the sense that it is absolutely massively disruptive; no one's denying that. It is going to be hugely problematic; no one's denying that. But what we're trying to do is build up a library of resources that aim to make the transition to AI-incorporated, or AI-enhanced, learning smoother. And to be honest, I think that's all we can do. There is no silver bullet to this; this is new. There's the question of whether it can be used in assessment and feedback, still to be discussed at a later date. There's the question of how best to design the assessment; we've got CTL and people working on that. So these are open questions. What we're trying to do is just facilitate this happening in as cohesive a way as possible.
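As a hedged illustration of how a traffic-light assessment scale like the one described above might be written down, here is a hypothetical sketch: red for no AI, three amber levels for some AI, and green for full AI. The level names and descriptions are illustrative assumptions, not UL's official wording; the UL resources themselves are the authoritative source.

    # Hypothetical sketch of a five-step, traffic-light AI assessment scale.
    # Level names and descriptions are illustrative, not UL's official text.
    from enum import Enum

    class AIUseScale(Enum):
        NO_AI = "Red: no generative AI use permitted in this assessment"
        IDEAS_ONLY = "Amber: AI for brainstorming and planning only"
        DRAFT_SUPPORT = "Amber: AI to improve drafts the student wrote"
        DEFINED_TASKS = "Amber: AI may complete specified parts, with acknowledgement"
        FULL_AI = "Green: AI may be used throughout, as directed"

    def assessment_brief(title: str, level: AIUseScale) -> str:
        # States the expectation up front, so student and lecturer share it.
        return f"{title} - permitted generative AI use: {level.value}"

    print(assessment_brief("Essay 1", AIUseScale.DRAFT_SUPPORT))

The value of writing it down like this is exactly what the discussion stresses: every assignment carries one explicit, communicable statement of how much AI use, if any, is legitimate.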
So, just then, a lot of the points that we've made there would be of interest to students. We're just thinking specifically: if you're a UL student listening to this, what are some of the key points you need to take from what we've discussed here today? Or is there anything else we need to mention for students? I guess first things first: check with your module lead about what's permitted around assessment. Because, thinking about the academic integrity policy and misconduct procedures that we will have published in 2025, if one uses generative AI in an unauthorised way, that will lead you into difficulty with academic integrity, and potentially to sanctions or penalties against you. So it is an academic integrity matter at the end of the day, if one were to use it inappropriately in an assessment, and it can potentially be serious. So if in doubt, check, and then also follow the guidelines that have been set out for you. We mentioned Microsoft Copilot, and that is the authorised tool for the University of Limerick at the moment. What else is there that we can signpost students to, or things they need to be aware of?

Well, the policies and procedures, when they're published, will be there. But I think the assessment scale will be hugely useful for the students. And as you said, what should they do? When we're talking about it being an academic integrity issue or a misconduct issue, it's about the fact that the rules of a specific assignment have been breached, and the rules of those specific assignments will, we hope, roughly correlate with one of the assessment scale levels. So something the students themselves can do: if they're given an assignment and they don't feel there are adequate guardrails around it, or an adequate explanation of how much AI use, if any, is legitimate, ask your lecturer. Do you know what I mean? The hope would be that your lecturer will detail how, and if, and how much AI can be used in an assignment. But if they don't, ask them. Be explicit. I think that's one of the major ones. We've had discussions with students, and we know students are using AI the whole time. And to be honest, it's potentially not even that problematic. We just want to make sure that this is visible.

And they're protected.

Correct. Absolutely. Yeah.

And I would say as well, we have created a framework when it comes to gen AI: the principles for gen AI. So when we're talking about gen AI, that's literacy, integrity, innovation, equity, and ethical and secure use. So for students as well: be familiar with these five principles, because each principle is divided into three sub-headings, one that relates to researchers and staff, one to students, and one to institutions. So it is very practical as well. It's a very practical way to say, okay, am I following these? Am I in line with what the framework presents and suggests?

And it's there to protect them.

Exactly. So it is really something that covers you, in a way, and you feel reassured to have those sorts of resources available.

I mean, my observation on all of this is: I'm quite a while out of undergraduate education, and I can remember the pressure of assessment, having multiple assessments at the same time, and high-stakes assessment, and you're just desperate to get through. So I can understand the temptation of utilising gen AI in a way that's perhaps unauthorised.
There is obviously the potential of breaching academic integrity rules, but I would also really encourage you to fast-forward to twenty years' time, when you're sitting in your office as a solicitor, or you're a teacher, or a doctor, or whatever it is, and you need to know certain information. We're trying to equip you with the skills, knowledge and behaviours to make you a competent and expert professional within your field. And I know it can be really difficult to see the wood for the trees in the middle of assessment, but if you try and take a step back and think about the purpose of education: the purpose of assessment is not about right now, it's about the future. Try and hold on to that, and that might focus your thinking a little bit around the subject. Do you have anything else to add on guidance?

I think, on the other side of that, because that's absolutely true from the student's perspective and is something to keep in mind, there's also the staff side, in the sense that, if you fast-forward 20 years to what your student will need from what you're teaching, a lot of the technical skill sets may have disappeared. A lot of the specialised knowledge or specialised skills they may have developed; I mean, people have talked about this in terms of computer programming, where the half-life of a skill set went from something like five years to two years. And that's increasingly going to happen in a whole series of other areas. But, as you were saying, there are certain things you need to know, and that's an appreciation and understanding of the discipline itself. You know what I mean? So if you're a pharmacist, for argument's sake, there might be a new machine that can make everything much, much quicker than the machine you learned on, but the fundamentals of pharmacy are still there, and the same for law, accountancy, engineering, science, arts, history, whatever. So from the staff's perspective, they're getting worried, and rightfully so, and the students are worried, about what the ramifications are for employment and what the skills of the next five to ten years are going to look like. But in actual fact, as it currently stands, that's an open question that we have to wrestle with. All we can be sure of is the theoretical foundations of each of the disciplines in the university; their value isn't going anywhere, right? So when we talk about reimagining education, it sounds like a big radical project. In actual fact, it's not. It's not that fundamental.

No, no, the fundamentals haven't changed at all. The practice may shift a little bit; how you deliver things, how you assess, may change a little bit. But those fundamentals, what you're teaching to those students, the core of what you're doing, remain the same.

You know what I mean? So we don't want people running terrified of this. This is just something we're going to have to wrestle with. As we said, there is no silver bullet, but what you're doing still holds value; it has to. Professional identity doesn't shift. The tools of the profession might shift, but not the identity itself.

Okay, I think we've covered quite a lot of ground there. Thank you, Fionn and Sylvia, for your contributions. I hope you have found this episode helpful, and hopefully you will join us for another episode of Give Me About 10 Minutes On.
I think 10 Minutes On is a loose one for this particular episode, but thank you very much. Thanks, Marie-Claire.
