mEDucation S1E3: Anna and Christine

Transcription

Christine Rivers and Anna Holland discuss the use of AI in business school education. They talk about their professional backgrounds and how they became interested in AI. They also discuss the prevailing sentiment in the UK regarding AI's integration into the classroom and the barriers to adoption. They highlight the importance of considering institutional, academic, and student perspectives, as well as dimensions such as academic integrity, learning and teaching, and employability and skills. They emphasize the complexity of managing AI integration and the need for a nuanced approach.

In this episode, I'm thrilled to be joined by Christine Rivers and Anna Holland, who speak to me today about the use of AI in business school education. Christine is a Professor of Mindfulness and Management Education and Co-Head of Department for Management Education at Surrey Business School. Anna is also at Surrey Business School and is an Associate Professor in Management and Director of Learning and Teaching. Hi all, and welcome to Meducation Unlocked, the podcast series. I'm your host, Eimear Nolan. Hi Christine and Anna. It's lovely to have you both join me today on the podcast. Could you give the listeners an overview of your professional background to date? Let's start with you, Anna.

Thank you. It's an absolute pleasure to be here. So prior to joining the University of Surrey, I worked in the further education sector for over 15 years, mainly focusing on apprenticeships and, within that, looking at the design and development of curriculum, student support, and quality and compliance. And for the past five years, I've been working in higher education here at the University of Surrey.

Well, that's really interesting, and the shift is also very interesting. How about yourself, Christine? Would you be able to give us a brief overview?

Yes, of course. Thank you. So my journey has been a little bit different to Anna's. I'm originally from Austria, and I worked in marketing and publishing there before coming to the UK to start my PhD in 2006 on human-computer interaction. What was really interesting is that it was a PhD scholarship funded by TELUS Research UK, and I was asked to look into remote collaboration. So over those three years, I spent a lot of time trying to understand the impact of technology on human behaviour, particularly on decision-making and how that manifests itself in our day-to-day in an organisational context. And so from there, I started to do some associate lecturing in sociology on advertising and marketing, going back to my original background, and then worked there as a researcher looking into qualitative analysis, and particularly how we can analyse this sort of setting, what we're doing here, how we can understand that. That was completely new at the time. And then I joined the business school in 2011, I think, as a lecturer in marketing, which was really exciting for me. Very quickly I developed an interest in management education and dropped the technology a little bit. I always came back to it, though, and I was still interested, from a consumer behaviour perspective, in how technology shapes us. But at the time, everything around that was a little bit in its infancy. And then from marketing I moved into management education, and very quickly it was clear that I have a bit of a knack for the various director roles.
I moved through the various director roles, from lecturer all the way to becoming Director of Learning and Teaching. And so, yeah, I've spent my whole academic career actually at Surrey. Then in the last three or four years, I added my interest in mindfulness to the management education. And so now this is how it all comes together: mindfulness, AI and management education is sort of my big area.

That's great, and that is a really good rundown of absolutely everything. You can completely see how you transitioned, I guess, to come back now to technology in education, especially with the advancements in AI. Anna, how did you start to pivot yourself towards AI? What sparked your interest in the field?

So I took a slightly different route into academia. I completed my undergraduate degree 10 years ago, so I had a career prior to actually turning to the benefits and the excitement of academia. But I've always been interested in student behaviour in learning. I did my undergraduate degree in education studies, and I very much looked at readiness to learn in adult students. I liked to challenge the perceptions of pedagogy, recognising that there's a spectrum of preferences, but that spectrum isn't fixed for any one person. I believe that it's fluid and that you can move along this spectrum during your time as a student. So I kind of evolved my thinking from looking at readiness to learn. Then, working with Christine when COVID hit, we created an approach to hybrid learning called active digital design that's gained some really good traction. It's now business as usual here at Surrey Business School, but we know that other schools have also adopted it. And so when Gen AI emerged, it really was the next progression: how is this impacting the way that our students learn today? And for me, when I was looking at the literature, I found that there was an under-representation of the student voice in this area. So that's really where my research over the past year has focused.

Yeah, and that's great. And I'm going to stop you right there, because I do want to ask you to delve more into the student voice. But you hit on something just before that that I want to explore a little bit more from both of you. As academics in the UK, what do you think the prevailing sentiment is here regarding AI's integration into the classroom? From a professor's viewpoint, for example, do you see professors embracing it? And if so, how are they doing that? And if not, why? What are the barriers? What's stopping people from doing this?

Yeah, I'm happy to get started on this, if that's OK, and then Anna can dive deeper into the practical area. So I look at it from an overview and sector perspective, maybe, because I think there are also external factors that we need to take into account first to really understand the picture of what's happening in the UK, and maybe similar scenarios happening elsewhere in the world. Well, first of all, technology has always been a bit of a double-edged sword, right? Because we look at it in the literature, and we look at it from a technology acceptance model perspective, you know? And we always see two camps: the ones that embrace it and really have trust in the technology, and the others who are a little bit defensive, not sure, you know, sceptical about it.
And it is really interesting to look at why that is actually happening, and I'll come to that later on in our conversation, you know, what the underlying principles of that are, why we have this sort of notion around it. But within the higher education sector, this is what I would say, and what I have seen over the last three years since COVID, when we had to move very quickly and abruptly from more in-person delivery into the online space. Anna already alluded to active digital design, which was built on my research, particularly around usability, you know, how people interact, how technology affects our behaviour and so on. So why is this important? Well, we have to look at it from a couple of different areas. The first one is that we believe there are three perspectives that people need to take into account when we think about how we integrate it. The first perspective is at institutional level: what is an institution actually doing, and do they want to embrace it? What is their guidance and their approach around that? The second perspective is the individual academic. What is their notion? Are they in the embracing camp or the sceptical camp, if you like? And then there is the third perspective, which is the students as well. How are they feeling? And I think there is a misperception sometimes that students nowadays, because they're always on their phones, will just be very happy to embrace any technology. But, you know, Anna can speak to that a lot more than I can, because she has done great research in that area. I think bringing those three perspectives together really changes the way we look at it, but it also already emphasises the tensions that it can cause in the conversations and discourse within the higher education sector.

Then, alongside those three perspectives, there are three dimensions that come into this discourse. So we often talk about academic integrity, or the bigger term is quality assurance: how can we make sure that what we are assessing is original and novel and their own work, and ethical as well? The second dimension is around learning and teaching and the assessment side of things. How are we bringing it in? Are we introducing it or not? Are we using it as a learning and teaching tool with our students, or are we leaving them to their own devices? And the third dimension is employability and skills. As we bring it into learning and teaching, or not, are we preventing them from developing particular skills that they may need later on in a working environment, or are we fostering that?

Yes. Yes.

So now you can see these three perspectives coming together. The institution, the individual academic and the student all have preferences around this, and they will come from different angles. And that will then guide, if you want, what happens at the quality assurance level, at the learning and teaching level, and then how it's taken forward to the employability and skills dimension as well. So it actually becomes a really complex construct that does require a complex mindset to manage it. And what that means is that there isn't really a solution to this at the moment. It's more about trying to take it one by one: let's look at this perspective and these particular dimensions, and how we can go about it.
So this is why, when universities have guidance, it's often very ambiguous and not clear. There were some universities, and I think you found an example, Anna, where they're very black and white. OK. But we do know that a one-size-fits-all does not exist, because of those different perspectives and dimensions that I've just alluded to. So the issue is that there are also risks with technology that are not addressed. And I think it's a bit similar to internet regulation in the beginning: nobody was really sure where it was going, and it opened the floodgates for cybercrime. And now we are with AI, which is a level above that. And to some extent, I think our indecisiveness and ambiguity in higher education, because we don't know, and we don't like not knowing because it's unpleasant to us, prevents us from actually coming up with something that helps us. So from my perspective, AI is a bigger fish to tackle in the ocean than the internet was. But we do need to look at it from that interrelationship, interconnectedness and dynamic of those three perspectives and dimensions. So I think that's everything I would say here. And then over to Anna, who can go into more granular detail on all these different perspectives and dimensions from her research.

That's really interesting. And Anna, would you mind giving an overview of what you see as, I guess, the prevailing sentiment regarding AI in the classroom at the moment?

Oh, absolutely. I think, as with anything new, we're seeing mixed reactions and appetites amongst academics. And broadly, at institutional level, we seem to be talking about using AI to reduce the reliance on academic time and perhaps cleaning up academic tasks, sorry, administrative tasks. And I think there's an appreciation that this is a complex area that impacts not only academic time, but also assessment, learning, teaching and student support, and all of them have separate opportunities and challenges that are inevitably interlinked. And interestingly, I was reading a report by Erlis Jell, released late last year, 2023, that predicted that in 2024 we'll see a shift with AI from a state of excitement to deployment. And I think that is very true of what we're currently seeing in the classroom now. So if we think about learning and teaching, we're seeing different speeds of acceptance and different speeds of adopting practices. Now, if we think back to this time last year, there was a real sense of anticipation about the impact of Gen-AI. Is it a threat to academic integrity? Is it a revolutionary tool that can change the essence of business and therefore impact our teaching and learning? And I think, in reality, a year later we've established it is both of those things. So with regard to the inclusion of AI in the curriculum, we're seeing this at the moment happening more at individual module level, and through exploring the benefits and challenges of Gen-AI, we're able to integrate, for example, ChatGPT and similar platforms. A couple of examples of how we're using it here in the business school: we have a new module being introduced next year for all of our second-year business management undergraduate students, and we'll be assessing the use of prompt writing to produce social media and website marketing copy. The reason we've chosen that is that, looking at reports in industry, one from Forbes, for example, suggests that 44% of marketers are using Gen-AI for this purpose. So we're reflecting what's happening in the workplace.
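(As a concrete illustration of the prompt-writing exercise Anna describes, here is a minimal sketch in Python, assuming the OpenAI SDK v1.x and an OPENAI_API_KEY in the environment. The model name, the brief and the product are invented for illustration; they are not details of the actual module.)

# Minimal sketch of a prompt-writing exercise producing marketing copy.
# Assumptions: OpenAI Python SDK v1.x; the brief, product and model name
# below are hypothetical examples, not part of the Surrey module.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The assessed artefact would be the brief itself: how audience, tone,
# length and constraints are specified.
brief = (
    "Write three 40-word social media posts promoting a student-run "
    "pop-up coffee stall on campus. Audience: undergraduates. "
    "Tone: friendly and concise, with one clear call to action per post."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[{"role": "user", "content": brief}],
)

print(response.choices[0].message.content)

(The student's critical evaluation of the output against the brief's constraints would then sit alongside the prompt itself.)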
A couple of my colleagues are using Gen-AI as a way of developing critical thinking: critically evaluating the output that a student would get from Gen-AI. And I think that's useful on two fronts. One, it helps students to consider criticality and critical thinking, which is a vital skill. But the other side of it is that it also helps us to open up that conversation about ethical and responsible use. So it's definitely coming in, and we're definitely moving into this deployment phase that was suggested. However, I do think as a business school we need to be mindful that we have specialisms in multiple disciplines, and integration of AI should be proportionate to that discipline. We're not yet offering degrees in artificial intelligence in business. So, fundamentally, if we're delivering business management degrees, we need to be cognisant of our responsibility to support students to be well-rounded, resilient and skilled individuals. We've seen what's happened over the past year, so it's inevitable that in their careers they will see new technology disruption and wider challenges, and Gen-AI is just the tip of the iceberg, really. So for our students, I think we also need to make sure that they're ready to be resilient, to be critical thinkers and ethical decision-makers, and fundamentally to have this curiosity to learn and discover. That will set them in good stead for whatever is to come in their careers.

So now you're in the firing line for a lot more questions. There are two things you said there that I wanted to delve into more. First, I want to start with your research, if you can explain the students' perception of AI. Because, to be honest, and I spoke to you already about this, we see a lot of the academics' perspective, but we're really missing the student voice on how they see it. And I was really excited when you were telling me that you're actually looking at that, because it's something that I'm really interested in looking at. Because, kind of like what Christine said, we have a perception: oh, sure, everything in their lives is around technology, of course they're going to want to do this. But your findings are actually really interesting in relation to this. Can you explain a little bit more? And then I will get on to my second question to you.

Sure. So I interviewed a large number of students last summer, particularly students who had just graduated, because I was keen that they would be able to speak to me openly and transparently about their use of generative AI, and I didn't want there to be any conflict in what they were telling me. I literally did the interviews probably two or three days after their graduation ceremony.

Because then the degree couldn't be taken back off them.

Exactly. Exactly. And the other reason that I was keen to speak to those students was that ChatGPT really came into effect during the final semester of their studies.

OK.

And so we spoke about many, many different things, but there were three key themes that came out of the study. The first was around immediacy. And in terms of immediacy, I think probably the key learning that came out was that our students were seeking validation and reassurance of their work, their ideas, their thoughts. And because ChatGPT was available 24-7, they were seeking that reassurance from ChatGPT. And I asked them, well, what would you have done before ChatGPT? And they said, well, I would have gone and spoken to an academic. I would have spoken to my friends.
And so what we started to see was this replacement of social interaction, which maybe wasn't instant, with the instant gratification of "I can get an answer". And the sorts of things that they were looking for reassurance and validation on were around: have I answered this question correctly? If I put my essay into ChatGPT and ask, does this meet the requirements of a first in a UK-based assessment, and if not, what do I do better? And we had some really interesting conversations around this. I said to them, well, what would you do if ChatGPT said, yes, that's a first, but then you submitted it and the mark that came back wasn't a first? Where would you go with that? And interestingly, they said, well, I'm going to trust ChatGPT again.

Oh, interesting. OK.

However, I wouldn't be surprised if some students would come and challenge the academic's judgment.

Yes. So you can see the fault lines there: OK, well, it's passed that test, and then the academic gives it a 2:1. You can see the potentially challenging conversations that would take place as a result of that.

Yes. Yeah. And I fully expect that that will happen, that a student of mine will at some point challenge a mark in that respect.

The second key thing that came out was about equity, and I'll talk more about this in terms of equity of the degree itself later on. But primarily, what was interesting was that students recognised that when you have a tool or a system with a free version and a paid-for version, and not all students would be able to afford the paid-for version, then how is that equitable if you are being assessed on the output of that tool?

That is a great point.

There was a real divide between the students. Some said, unless the university can provide everybody with the most up-to-date paid-for version, then we shouldn't be expected to use it in assessment, because it's not fair. Equally, a strong argument was: but there's always been inequality. There have always been students who couldn't afford their own copy of a textbook. There have always been students who couldn't afford private tutoring. So there was a real split. I do think, as a university, we probably need to consider that, and to consider, if we are assessing something using ChatGPT or similar in our assessments, how do we ensure there is some access to that?

The final, or third, main thing that came out was obviously around integrity. That was front and centre of a lot of the discourse, certainly around the time I was doing the interviews. Fundamentally, and I found this really interesting, the students I spoke to said: we recognise it's an easy option to use ChatGPT to cheat, but it's not so good that, if I've never cheated before, I would risk starting to cheat now. And I found that to be a really interesting insight, because I think the assumption in some of the conversations is that because it's there, students will use it, and they will use it inappropriately. But that wasn't the finding from the study that I did.

You did the study, you said, last year, right? I wonder, has that changed in the year since? It's been around, and people have had time to test it. It's constantly learning, constantly getting better. And the things that it's not good at, we're becoming very aware of, and while they're being fixed, we're aware that, OK, I won't use it for references or whatever it may be. I'd love to know.
You probably don't have the answer to this right now, but I'd love to know whether that sentiment has changed slightly as they've become more familiar with it in general.

Yeah, absolutely. And that is a key question. What we don't yet know is whether the students that I spoke to last summer were caught up in the novelty and hype of ChatGPT.

Yeah, yeah.

So I guess what we don't yet know is how much students are now using it, and has it become an integral part of their learning and studying? And fundamentally, what came out of this was that, although it was very clear that if they hadn't cheated before they were unlikely to cheat using ChatGPT, there were blurred lines around what was acceptable use. So, for example, one student very comfortably said: look, I do a draft of my assignment, it's 500 words too long, I put it into ChatGPT and say, could you cut it down to 1,000 words for me? And then I submit that 1,000 words. OK. That was deemed acceptable by that student. Whereas other students said: I know people use it to edit their work, and I don't think that's acceptable, because fundamentally it's not the final product of their work that they're submitting. So there were all of these blurred lines. And what became so apparent, and what the students were calling for, was: we really value open and transparent communication and conversation. We want to be part of making these guidelines, and while at the moment we're kind of just getting on with it, we need this framework. We need some guidance to understand where that line is that we can't cross.

Yeah. And it's super interesting, because while there needs to be a line, and I agree with them on that, I think the one thing everyone is in agreement on, even where there is no black and white, is that there needs to be some form of guidance around this. And especially from a student perspective, and this kind of brings me on to my second question to you, Anna: industry. Like you said there, the students that you're going to open this marketing module up to next year, presumably some of them may want to work in marketing, and Gen AI is being used to some degree, if not a big degree, in certain industries at the moment. I mean, if you have students in healthcare, it's huge; it's being used a lot right now. So then we also have to reckon with: OK, we may not want them to fully use it, but our duty of care to a student is also to equip them with the knowledge and the skills and the abilities to go out into industry and be proficient in what they do. And if their industry requires them to be proficient in AI skills, we need to start teaching them how to do that. So my question, I guess, is this: industries are more than likely going to start seeking students who have the knowledge and the skill set of AI, and they're going to start looking for those. And students who don't have those skills, or were not allowed to use these tools, will straight away be at the back end of the recruitment cycle if skills in this area are highly sought after. So what implications does this all have for students transitioning to the workforce if we say, OK, we're not using it? Are we then, as you said before, creating another divide, a disadvantage? If certain universities won't use it, are they then creating a disadvantage for their students compared to the universities who do use it? Kind of similar to the paid and unpaid version, but on a larger scale. What are your thoughts around that?
So I think that we need to recognise that we need students to be competent in the use of AI and Gen-AI as they progress into the workplace. It's something that is going to be integral to workplaces and to their future careers. The extent to which we should integrate it into our programmes, though, I think is what is in question. So are we at the position where we recognise that, for example, ChatGPT or any similar Gen-AI programme is just another tool? Is it similar to, for example, Microsoft Word or Python, another tool that we equip our students with the confidence to use competently? Or does it become a primary driver in our outcomes for students? So I think there's a balance to be reached, and I'm not sure, as an industry, that we are there yet. I think Christine can probably talk on this more.

Thank you. I think it brings us very nicely to the question around intention. It's very easy, when technology arrives and comes into our lives in whatever format, to jump onto it and use it for the sake of it. And as it develops, it shapes our behaviour in the same way that we as a society shape the technology and how it evolves. It's a reciprocal cycle, in a way. So particularly with AI, and we have seen this with students who have been on placement and have entered the workforce, where they weren't exposed to AI beforehand, they were asked to utilise it in a lot of different ways. I've got a couple of placement students, for instance, who were asked to reduce costs and increase performance, particularly around translations: to use AI for translations and reduce the cost of actual human translators. But then they still have to go back over it. Others are asked to use it as a tool for recording minutes for meetings. What we can see here is that there is an incentive in the way AI is used, and Anna has already alluded to this: the instant gratification. It also takes on those painful processes that we don't want to do, but of course there is a danger to that. And the danger is that we lose our own human capabilities, and we cannot evolve them further, because we're outsourcing them. So what are those human capabilities? Curiosity, critical thinking, creativity. And if we don't address them, if we don't evolve them further, then they will diminish. It's just how it is. And immediacy is the other one: we want it here, we want it now. So what's the currency that AI and most technologies are playing with? Attention. And that is really where we are vulnerable. AI and other technologies start to influence our thoughts and our decision-making, and it becomes this intermediary, if you want, between the mind and the outside world. And it is invasive and persuasive at the same time. Now, that sounds really negative, absolutely, but the question is: how can we counter it? The only way that I see is coming back to being really clear about what my intention is in using it, being conscious about that, and mindful in that. And I'll explain what I mean by this. Now, I really like Anna's example of the student who says, I've got 500 words too many, let's do that, you know, you do it for me. Because there is an intention behind it, OK? And so when we talk about intentional, intentional is about the motivation behind an action. When we talk about conscious, it's the knowing and the experience of the mental states that are arising, that are happening.
And then mindful is the ability and the practice of present-moment awareness of the body, the mind, the feelings and the mind objects. So this is how we look at this. And if we can become present with this, to say: OK, I have this essay, which is 1,500 words, and I need 1,000 words. My intention is that I want to shorten it. I have tried it myself, but I've hit a brick wall. Which tool can I use to help me with this? And then to compare it: is it still giving me the same message that I wanted to get across, or has it actually changed it? So you are still curious about how technology has shaped it, you're still using your critical thinking skills, and you're being creative in the process itself. And I think that is what is really important. It's not just about putting it in there and then simply trusting whatever comes out. I listened to a lovely podcast the other day by Randima Fernando, who looks into humane technology, and he described it really well. He said technology is exceeding human limits, and the human limits are human capability and human vulnerability. And when that's happening, it's a threat. And so I think it is really about coming back to these intentional, conscious and mindful perspectives that we have, and ultimately utilising our own awareness intelligence. We talk about this a lot: artificial intelligence and awareness intelligence. There is a marriage there, and we can go through the process of exploring a lot further what this awareness intelligence means in the context of artificial intelligence. If we don't do that, we might change our behaviour and society in a way that we don't want them to change, and we might sacrifice that really valuable currency that we have: attention. Because ultimately, that is what we need to train in order to focus. And if we don't keep that sacred, if we don't keep our attention present and utilise it in a way that serves our behaviour, then AI and technology are probably going to take over in some ways. And that's the danger, and that's what we need to be careful about. So I think we can live in coexistence and in an equitable relationship, but we need to utilise our own intelligence to make that happen, and to be intentional, conscious and mindful in how we're using it.

Anna, have you any final thoughts or insights to share with us before we finish up?

I would like to share one final thought that came through loud and clear from the students I spoke to last year, and it ties in with what we've just been discussing around intentional use. There is a concern that the loudest voice shouting at the moment is perhaps the concern around academic integrity, and that this could be casting doubt among future employers about the integrity of the degrees that students are presenting themselves with. Our students were concerned that it may devalue the degree. They equally recognised, however, that within a short amount of time in the workplace, you would probably identify someone who didn't have the well-rounded skills, and so there would be a fail-safe mechanism, if you like. But I think we need to really shift our discourse in academia away from concerns around academic integrity and much more towards this intentional use. And I think there's a real opportunity for us to do that now, and I think that will also pave the way for our students to be successful as they transition into their careers.

Yeah. And what a great sentence to end our podcast on. Thank you so much again, Anna and Christine, for your time today.
I found it very enlightening, as I'm sure our listeners will, too. At the end of the podcast, I'll have a list of Anna and Christine's publications in relation to this, along with their contact information, should any of the listeners like to get in touch with them. Thank you very, very much again, and have a great day.

Thank you.

Thank you for listening to Meducation Unlocked, the podcast series. Make sure to follow us on Spotify.
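(A companion sketch of the "shorten, then compare" workflow Christine describes, under the same assumptions as the earlier example: the OpenAI Python SDK v1.x with an OPENAI_API_KEY in the environment. The file name, model and word limit are placeholders, not anything the guests prescribed.)

# Minimal sketch of the intentional "shorten, then compare" workflow:
# ask the model to cut a draft to a word limit, then check the result
# yourself rather than trusting it blindly.
from openai import OpenAI

client = OpenAI()

draft = open("draft.txt").read()  # e.g. a 1,500-word essay (placeholder file)
limit = 1000

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Shorten the following text to about {limit} words "
                   f"without changing its argument or key examples:\n\n{draft}",
    }],
)

shortened = response.choices[0].message.content

# Report both lengths so the author can verify the cut happened...
print(f"Original:  {len(draft.split())} words")
print(f"Shortened: {len(shortened.split())} words")
print(shortened)
# ...but the comparison step stays human: reread the shortened text and
# ask whether it still carries the message you intended to get across.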
