
The podcast "Behind the Scrubs" discusses the role of artificial intelligence (AI) in medicine. Host Nadia interviews friends Natalia, Maria, and Leslie, who work in healthcare. They debate AI's potential benefits and limitations in healthcare settings. While AI can process data quickly and aid in tasks like CPR, the guests emphasize the importance of human connection, empathy, and critical thinking in medical practice. They agree that AI should be seen as an assistant rather than a replacement for human healthcare providers. AI could help with administrative tasks but should be supervised by humans to prevent errors. The guests highlight the need for a balance between AI assistance and human oversight in healthcare. Welcome to Behind the Scrubs, Stories, Science, and the Next Generation of Medicine. Will AI replace human physicians in the future of medicine? Could a computer help your doctor save your life? That's the question at the heart of AI medicine. I am Nadia Myers, a pre-med undergrad, founding host of Behind the Scrubs, a.k.a. BS in Med. And in this podcast episode, we'll explore the amazing ways artificial intelligence is changing the world of healthcare and what it means for the future of medicine. As this is my first ever podcast, I have two special guests joining me who are both in healthcare, my best friends Natalia and Maria, and Leslie Ortiz. Go ahead, introduce yourself, queens. Hey, everyone. As you heard from Nadia, my name is Natalia, Talia for short. I'm currently a phlebotomist and pre-PA undergrad. Hey, I'm Leslie, and I'm an EMT and a current pre-med undergrad who loves research. Yes, thank you guys for coming and being the first guest in the season. Now, I want to jump in and ask for your guys' ideas on AI medicine. What about you, Leslie? What do you think? You go. Well, I have been an EMT since my senior year of high school, but instead, I started when I got my certification. So, I have definitely seen many, many interesting things. However, in my line of work, it is very fast-paced, complex, and even chaotic. So, there is not much time to depend on machines to get a diagnosis. I believe AI can be used as a great tool, but not for every specialty, especially in cases like mine. That is actually pretty interesting because I know you've had some traumatic experiences and some weird ones. However, I am glad you brought up the point that AI should not be dependent on certain specialties, or at least in some complex, fast cases. This reminds me of a peer-reviewed article that I read by Mehet Gun, and he is actually an emergency physician in Istanbul. He actually conducted an experiment with using AI models like tragedy T to diagnose patients, and that's crazy. Tragedy T and AI, that's what? He proved that AI shows promise but needs provision from human physicians, and because it does not work well in complex scenarios or problems, honestly, the majority of medical issues for patients are pretty complex, and it needs careful diagnosing and accurate diagnosing. So, do you guys think that there can be any specific job that AI can help you with, especially as an EMT, or are you at a hard no? I think it really depends on where AI can be used. You know how there's a machine that can give CPR? Yes. That's amazing because CPR is tiring, and God forbid I have to do that for 30 minutes. Yes. However, when it comes to something that's so fast-paced, there's not much time to really input AI into such an area because you have to be on your feet. 
I don't exactly have time to type something in and be like, okay, now I can do it. Exactly. Yeah, well, thank you for sharing, Leslie. Those are very valid points, because, I don't know, they seem cool, they seem like really exciting technologies, but at the same time, you can't really trust AI, you know, or even robots, doing those types of things. But what about you, Talia? I know you just started your new job as a phlebotomist. Yes. Woo. But you already have gotten many hours of experience so far. You've been working, girl. So how do you feel about AI in medicine, and how do you feel about what Leslie said?

I think it's so true that there are some things that can be done better with AI, like the CPR. But also, for example, in my job, when I'm sticking the needle in and, let's say, I need to fix the needle and just fish a little bit, I feel like AI can't do that. Exactly. Even little robots, like, how are they going to know exactly? How are they going to see the patient, see their reaction, to see, ooh, I went a little bit too far? I think that's something more human. And I feel like when I talk to my patients, it's just so much better to have a conversation with them in general.

Yeah, I agree, because I know specifically in your job they actually have some advances with AI machines. I know they do the blood draw process, and they easily give vein visualizations. I do use the vein visualizations, and it's a pretty good thing, at least for a phlebotomist, because sometimes it's really hard to find veins. But you're right about the blood draw. I don't think I can trust a machine doing my blood draw, because who knows? Even though they might feel a difference in pressure and everything, I still trust a human more, I don't know if you've had that experience, you know? But the human-to-human connection and communication is big. Being quick on your feet if incidents happen, or even just human contact, is a big part of being a phlebotomist. And that is the same for physicians and other healthcare professionals. And I feel like that's one of the big reasons why patients keep going to the hospital even though some of them might have that fear. So imagine having a fear of hospitals but having to go to a robot for things. It's not really exciting.

I actually wanted to, if you guys don't mind at all, pull up some research that actually blew my mind. For example, in a 2019 article by Ahuja, the author points out that AI can process massive amounts of medical data way faster than any doctor could. That means it could flag rare conditions or catch patterns that physicians might miss in a sea of patient charts. So that's pretty powerful, especially when you're talking about early diagnosis of things like cancer. But then, on the flip side, Matthew Baker, in a 2020 article, reminds us that patients often don't want a computer deciding their treatment. And that's valid. There's just that trust in the doctor-patient relationship that a screen just cannot replace. And honestly, I feel that. Like, if I was in a hospital, I wouldn't want my doctor to just say, yeah, the computer told me this, so we're going to go with it. That doesn't work. I just want them to explain and reassure me and actually be human about it, you know? What do you guys feel about that? Like, about a doctor saying, yeah, the computer told me this, so I think we should go this way?
No, I totally agree with that, especially in a field like emergency medicine, since that's where I have the most experience. I've worked with children a lot, and they need that human interaction, they need that emotion, and a computer can't give that. In addition, especially with physicians, you want to be able to trust them, that they know what they're doing, that they're knowledgeable. Even if it's true, you don't want to feel like your doctor is saying, let me leave the room and go Google this thing. That's not what you want. You want connection, and you can't get connection with a computer.

Exactly. And, for example, on the ambulance, a lot of times you'll get a call that's not even an emergency, people will call an ambulance just because they need company. Yeah. And that's not something a computer can do. Yeah. Like, you can't. Exactly.

And let me play devil's advocate for just one second. I would say that I think it would be useful for looking at charts, but when it comes to the patient, actually being there in front of the patient, you should not be using the AI. I agree. Imagine they say something like, I found this on Google, and you're like, actually, well, now ChatGPT is teaching us something as a doctor. Imagine that. That's crazy. But I agree with that. I don't think I would like that very much. Can I have a new doctor, please? We go outside and we get another doctor who's not on ChatGPT.

Especially because AI right now doesn't have all the information it needs. Exactly. Like, if you ever use it for anything and it gets it wrong, you're like, thanks. Yeah, and it's not always right, because, you know, we're in biochem, and our teacher kind of tells us to use, what is it, ChatGPT, and when we look things up, when we put them in, it does not help us whatsoever, especially when we try to find answers with it. So I feel like that gives us a little preview of why ChatGPT should not be something we depend on. Right. Especially in a professional setting. Especially professional. A tool, not a replacement. Yeah. That's how I think AI should be seen.

So what I'm kind of saying is that AI should be seen as an assistant, not the boss. It stays supervised, in a way, so it can do the boring work but doesn't really take over the reasoning. Now let's also be fair. There are people who think AI should eventually take over certain tasks from doctors, like Tali was saying. Some argue it might reduce human error, speed up care, and even lower costs, and that's understandable. In this other article I found, a 2023 article by James, he said that AI could actually take over a lot of the boring administrative stuff that drains doctors, like writing notes, scheduling, or sorting insurance codes. So imagine doctors spending more time with patients instead of paperwork. So let me ask you two: if AI could completely handle paperwork, charting, or even repetitive tasks, would that actually help you in your future careers?

I feel like if it does some organization and has it ready for me to check, it would be okay, but I always think that humans should double-check everything that it's doing. And yeah, I agree that it could help with some human error, but then there also has to be a human on the other side checking to see if everything is correct, because AI could also have an error. I agree with that.
However, for example, on the back of the truck, and I'm sure in the hospital too, you have to write a patient care report for every single person you see. And if you input that into AI and it messes it up or something, that can be a liability for you. God forbid it goes to court, or there's a question about something that you wrote and you're like, oh, I didn't write that, it got changed, or whatever the case may be. It can really mess with your career, and it can also mess with the patient in case something in the file changes, like a vital sign or something. So I don't know.

So would you guys say that you would write your report and then maybe use AI afterwards to proofread it, as a way of checking your errors? Or would you talk to it, have it write the report for you, and then read over it? What do you guys think would be the best way to use AI like that? Definitely not write it for me. Yeah, definitely not. I agree.

And now that Leslie has me thinking about the reality of things, like, what database does ChatGPT use, like what AI would we, yeah, what would we use? Because if that data gets breached, that's so much information. Oh, that's a big one. Patient confidentiality is a big thing. What if it gets hacked? Exactly. Oh, no. Yeah, with what you're saying now, ChatGPT and AI are kind of under the government. But, oh, I'm supposed to heal. I think it's over.

But yeah, see, so that's the thing. There's a lot AI could do to make medicine better in specific ways, but there are, of course, limits. Cabral, in his 2025 article, even argued that AI should never be fully independent, which we all agreed on, and should always need a doctor to sign off. So basically, AI is like a super smart intern sometimes: it can suggest, calculate, and save time, but the doctor still has to be in charge.

So here's my take on a solution, because now let's talk about a solution. Instead of AI replacing doctors, because of course it cannot, in my view, hospitals should focus on integrating AI into systems with strict physician supervision. That means AI helps doctors make decisions and diagnose when they need the help, but doesn't make those decisions alone. It should also be used to help with human error, do charting and paperwork, and minimize the use of machines that cost our patients so much money, like MRIs and scans. Like, maybe instead of doing all those different tests, I forget the name, maybe there's a way AI can figure out how to do fewer tests for less money for our patients. A possibility, probably in the near future. But again, it would be used as a tool.

Also, med schools, well, med schools and PA schools, should start teaching students like us not just how to use AI better, but also how to recognize when AI is wrong. This keeps them from depending on AI either. That way, future doctors don't get lazy and trust the machine blindly, because that can lead to serious consequences for both the doctor and the patient. Now, can you guys give me a solution that you would think of? You can agree with me, you can disagree, or do you have another kind of solution in mind?

I agree. Like I was saying when I was playing devil's advocate, I think there should be rules to it, and, like you said, there should be training.
If we're using a new tool in the field, then there should be adequate training. Like: don't fully trust this, this is just a tool, this is not going to do your job, this is not going to replace you. And I do think that there's a good future with AI, and it can help the workforce be more proficient and make things cheaper. But yeah, I think that there are many downsides to AI, so I think that we should do more research and, yeah, just get fully trained first before thinking about adding it.

I agree. I'm very skeptical when it comes to AI, just because I definitely think there have to be, like, years of trials, many published articles, and a lot of training, because at the end of the day, we shouldn't be dependent on it. It should be a tool, not a replacement. And so it should be embedded slowly rather than all at once. I don't know, I'm very skeptical when it comes to AI.

I know. Seeing your faces, you're like, ahhh. I know. But who knows? We live in... I feel old, though. I know. We're aging very quickly, so I don't think there's another option. It's going to happen eventually. So we just have to evolve with it in a way that has not a negative impact, but a positive impact on our lives. Exactly. That way, patients still get the human side of medicine, but doctors also get that extra support from AI when it's helpful.

But I just want to say, thank you guys so much. Well, that's a wrap for this episode of Behind the Scrubs. Thank you all for listening, and thank you to my beautiful guest speakers, Tali and Leslie. Until next time, keep your pulse on the future of medicine. Bye! Adios! Bye!