podcast part 13 (redo)


00:00-04:43



Transcription

This episode discusses the use of ChatGPT ("chat") in education and its impact on students. It argues that while chat can enable cheating, the underlying issue is that students prioritize grades over knowledge. Standardized tests are easily exploited with chat, so educators need to redesign their tests to discourage AI use. Producing passable text with ChatGPT is so easy that many students use it as a shortcut, but relying too heavily on it limits creative thinking and emotional depth in learning. Chat also raises ethical concerns about plagiarism and false information, and it undermines the reliability of grades as a measure of understanding. The episode asks why society unquestioningly embraces chat as amazing and inevitable.

While many students treat chat as a magical savior from monotony, we wanted to uncover whether its usage is truly to the benefit or detriment of students. Clearly it can lead to issues with cheating, but the reason behind academic dishonesty is important. Students cheat because they value the grade over the knowledge. It doesn't help that standardized tests are designed in a way that makes it easy for students to use chat to cheat on them. If educators want students to learn deeply, and want to evaluate them in the most equitable way possible, they have to redesign their tests to prevent AI use, or at the very least resist it.

Exactly. The entire process of bubbling in a Scantron to answer questions that are identical to practice problems is rather robotic. And it doesn't help that chat is so tempting. Producing relatively good-quality text with ChatGPT is so easy a child can do it. And many children actually do. My point being, it's no surprise that many students use it as a shortcut. Chat isn't just academically dishonest. It kills school as a place of learning.
But it also raises necessary questions about what learning really should entail. Beyond not being assessed properly, by relying too heavily on large language models to do the work for them, many students will exercise creative thinking less. If you don't cleverly phrase a thesis, chat's got your back. Still, the model is a long way from perfect. Lots of critics claim that AI-generated writing lacks the emotional depth that we humans have. If any listeners are currently writing their wedding vows, word of advice: you might want to ditch ChatGPT and stick with the tender love and care of old-fashioned composition.

The philosopher Ivan Illich would agree with these concerns about the lack of emotional depth, not the wedding vows. He warned that an over-reliance on technology like ChatGPT would limit one's ability to produce independent work. All your ideas would need to be filtered through an algorithm before they can be fashioned into an essay, piece of code, or math proof. Who's the main contributor to the final product at that stage? You or chat?

And what about the ethical implications of using ChatGPT? Chat functions by condensing sources from the internet into a single response without citing where the information came from. If there's one thing I learned as a freshman at Duke, it's that no one loathes plagiarism more than academics.

Right. It's nearly impossible for someone in academia to research using OpenAI's tools without violating a plethora of plagiarism rules. And before you say, "Can't you ask chat to include citations?" University of North Texas professor Brady Lund showed that while chat can use natural language processing to add citations, it's still prone to errors like citing a source inaccurately or failing to cite an author entirely.
With no way to check, how would you know if you're committing plagiarism? Is it even worth the risk? Chat's errors go deeper than forgetting authors or confusing publication dates. Even in its primary responses about factual information, it lies. It can and will output false information as it pulls from the internet. If you are relying on ChatGPT, you have no idea if it is correct unless you dig deeper yourself.

But that's the thing. Many users don't dig deeper. They just accept chat's answers as the truth. What does that say about our society if the majority of people believe the words of a computer without a shred of doubt?

And even if people never use it to cheat, ChatGPT will still throw a wrench into academia. Education is built on the idea that grades show proficiency. But thanks to ChatGPT, we can no longer depend on grades to prove that we understand the content. At this point, it seems that either chat has to adapt to the rules or the rules have to adapt to chat.

Right. So if it has the potential to cause so many structural changes, why do we default to thinking chat is amazing and inevitable? We all seem to jump on the bandwagon of new technology like chat, thinking its apparent efficiency will only change our lives for the better. Those like Steve Jobs and Larry Page, creators of world-altering technologies, get praised in our society, making us feel almost wrong to question their implications.
